
Showing papers on "Standard error published in 1978"


Journal ArticleDOI
TL;DR: A new approach for estimating metabolic rates for manual materials handling jobs is presented and showed a correlation coefficient of 0.95 between the measured and predicted metabolic rates.
Abstract: A new approach for estimating metabolic rates for manual materials handling jobs is presented. This approach was applied to 48 different jobs. The model validation showed a correlation coefficient of 0.95 between the measured and predicted metabolic rates. The coefficient of variation (standard error/sample mean) was 10.2 percent.
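The coefficient of variation defined in the abstract (standard error divided by the sample mean) can be sketched numerically. The rates below are made-up illustrative figures, not the paper's data:

```python
import numpy as np

# Hypothetical measured vs. predicted metabolic rates (kcal/min); illustrative only.
measured = np.array([3.1, 4.2, 5.0, 2.8, 6.1, 4.7, 3.9, 5.5])
predicted = np.array([3.0, 4.4, 4.8, 3.0, 5.9, 4.9, 3.7, 5.6])

# Pearson correlation between measured and predicted values.
r = np.corrcoef(measured, predicted)[0, 1]

# Coefficient of variation as the abstract defines it: standard error of the
# prediction residuals divided by the sample mean, expressed in percent.
residuals = measured - predicted
se = residuals.std(ddof=1)
cv_percent = 100.0 * se / measured.mean()
```

A small CV alongside a high correlation is what the paper reports (10.2 percent and 0.95 respectively) as evidence of model validity.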

282 citations


Journal ArticleDOI
TL;DR: In this article, the firm's beta is estimated by running a regression of the form: where Ri = rate of return on equity for firm i, Rf = the risk-free rate, and u = a white noise random variable.
Abstract: Much of the applied work in finance, for instance the literature on capital budgeting, assumes that a firm's management has an accurate estimate of the firm's beta. This estimate is presumably derived by running a regression of the form: where Ri = rate of return on equity for firm i, Rf = the risk-free rate, and u = a white noise random variable.
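The equation itself is absent from the excerpt. Given the variables it defines, the regression is presumably the standard excess-return form used to estimate beta; writing Rm for the return on the market portfolio (a symbol the excerpt does not define), it would read:

```latex
\[
  R_i - R_f = \alpha_i + \beta_i \,(R_m - R_f) + u
\]
```

The estimated slope \(\beta_i\) is the firm's beta referred to in the abstract.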

34 citations



Journal ArticleDOI
TL;DR: At the conclusion of a symposium on the design of industrial experiments held in 1956, Tukey in some remarks entitled "Where Do We Go From Here?" made the following prediction:
Abstract: The adequacy of an estimator is usually measured by two criteria: degree of bias and the magnitude of the squared standard error (mean squared error). Least-squares estimators (where such exist) are both unbiased and have minimum standard error in the class of all estimators which are linear functions of the observations. Least-squares estimators are often considered to be the "natural" estimators. But in situations in which the standard error of this type of estimator is very large, it would seem more natural to look for estimators which have smaller standard error at the expense of introducing some (controlled) amount of bias. At the conclusion of a symposium on the design of industrial experiments held in 1956, Tukey (18, pp. 82-83) in some remarks entitled "Where Do We Go From Here?" made the following prediction:
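The trade-off the abstract describes, accepting a little bias to cut a large standard error, can be sketched with a ridge-type shrinkage estimator on a nearly collinear design (a minimal illustration, not the paper's own method; all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned design: two nearly collinear predictors, the setting in
# which the least-squares standard error becomes very large.
n = 50
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)  # almost a copy of x1
X = np.column_stack([x1, x2])
beta_true = np.array([1.0, 1.0])

def fit(X, y, lam):
    # Ridge estimator; lam = 0 gives ordinary least squares.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Monte Carlo mean squared error of OLS vs. a slightly biased ridge estimator.
mse = {0.0: 0.0, 1.0: 0.0}
reps = 500
for _ in range(reps):
    y = X @ beta_true + rng.normal(size=n)
    for lam in mse:
        b = fit(X, y, lam)
        mse[lam] += np.sum((b - beta_true) ** 2) / reps
```

Under these conditions the biased estimator's mean squared error is far below that of least squares: the controlled bias it introduces is swamped by the variance it removes.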

19 citations




Journal ArticleDOI
TL;DR: In this article, assessing the significance of apparent total ozone trends is shown to be equivalent to assessing the standard error of the means, and trend detectability is discussed, both for the present network and for satellite measurements, using statistics from daily observations at Dobson stations from 40 to 60°N.
Abstract: Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time (area) averages depend on the temporal (spatial) variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements, using statistics from daily observations at Dobson stations from 40 to 60°N.
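The abstract's point that standard errors of time averages depend on the temporal correlation of the averaged parameter can be sketched with a common AR(1) effective-sample-size correction (a standard approximation, not necessarily the paper's exact method; the series below is simulated, not ozone data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a daily anomaly series with AR(1) serial correlation.
n, rho = 1000, 0.8
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal()

# Naive standard error of the mean assumes independent observations.
se_naive = x.std(ddof=1) / np.sqrt(n)

# AR(1) correction: effective sample size n_eff = n * (1 - rho) / (1 + rho),
# so positively correlated data yield a larger corrected standard error.
rho_hat = np.corrcoef(x[:-1], x[1:])[0, 1]
n_eff = n * (1 - rho_hat) / (1 + rho_hat)
se_corrected = x.std(ddof=1) / np.sqrt(n_eff)
```

Ignoring the correlation would overstate trend detectability, which is exactly why the paper ties trend significance to a careful standard error of the means.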

6 citations


Journal ArticleDOI
TL;DR: The authors showed that measurement error tends to lower the probability of rejecting such a null hypothesis by inducing a lower limiting value of the coefficient's t statistic, despite an ambiguous effect of measurement error on the estimated standard error.
Abstract: The well-known result that measurement error in an independent variable biases the least-squares estimator of that variable's regression coefficient towards zero is insufficient information to determine the effect of measurement error on the standard t test of the null hypothesis that the coefficient equals zero. This note shows that measurement error tends to lower the probability of rejecting such a null hypothesis by inducing a lower limiting value of the coefficient's t statistic. This result holds despite an ambiguous effect of measurement error on the coefficient's estimated standard error.
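The note's result, that measurement error in a regressor tends to depress the t statistic even though its effect on the estimated standard error is ambiguous, can be illustrated by simulation (a sketch with invented parameters, not the note's derivation):

```python
import numpy as np

rng = np.random.default_rng(2)

def t_stat(x, y):
    # Slope estimate and its t statistic for the simple regression y = a + b*x + e.
    xc = x - x.mean()
    b = (xc @ y) / (xc @ xc)
    a = y.mean() - b * x.mean()
    resid = y - a - b * x
    se_b = np.sqrt(resid @ resid / (len(x) - 2) / (xc @ xc))
    return b / se_b

n = 200
x_true = rng.normal(size=n)
y = 1.0 + 0.5 * x_true + rng.normal(size=n)

t_clean = t_stat(x_true, y)

# Contaminate the regressor with measurement error: the slope estimate is
# attenuated toward zero and, per the note, the t statistic tends to fall.
x_noisy = x_true + rng.normal(scale=1.0, size=n)
t_noisy = t_stat(x_noisy, y)
```

With equal signal and error variances the slope is attenuated by roughly half, and the null hypothesis of a zero coefficient becomes correspondingly harder to reject.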

3 citations


Journal ArticleDOI
TL;DR: In this article, the authors used the chi square test to measure the internal sample variance of site density per sampling unit around the site density for the entire sample, which is a descriptive statistic, not an estimator.
Abstract: TWO OF PLOG'S REPEATED CRITICISMS, my dual failure to use efficiency and estimation, are based on his own research needs in Oaxaca (Plog 1976) and ignore the two-stage nature of my experiment. I was sampling from a known population, the parameters of which were empirically obtained. The population parameters and the sample frequencies were known, and the two values were compared during the first stage, the chi square analysis. Plog seems to be overlooking the first stage of my design in his evaluation of the second stage, the ranking by economy of samples passing the chi square test. Plog's Oaxacan samples came from an unknown population, and the efficient estimation of population parameters was necessary to his design. Chi square and economy served the same purpose in my work as efficiency and estimation did in his design. Certainly, the methodological maxim that research objectives and available data condition the type of analytical tool that is used in a particular situation should be remembered.
My use of sample variance is correct for several reasons. First, sample variance correctly measures what it is supposed to: the internal sample variance of site density per sampling unit around the site density for the entire sample. I was evaluating samples, not the population, as Plog's suggestion of standard error seems to imply. Secondly, sample variance is a descriptive statistic (not an estimator), as documented by its inclusion in Blalock's (1972) section entitled "Univariate Descriptive Statistics." Thirdly, none of the standard references, Kish (1965), Cochran (1963), and Blalock (1972), mentioned any change in the formula for sample variance with a changing sampling scheme. This is because of the descriptive nature of the statistic. However, Plog's standard error does change with the sampling scheme. Plog's explanation of the relationship between chi square and sampling fraction is useful and explains part of my anomalous results.
However, there is another dimension: archaeological survey is primarily a sampling of the spatial population in order to detect and recover the cultural and environmental populations (Mueller 1974:62-63). Thus an increasing sampling fraction implies only a greater number of survey units, but not necessarily a greater number of elements of the cultural populations. In some cases, the number of sites or artifacts (elements of the cultural population) may increase directly with an increase in the sampling fraction; in other cases, many survey units without sites could cause a decrease in the site or artifact frequencies as the sampling fraction increases. Therefore, concomitant increases in the sampling fraction and frequencies of cultural elements are neither necessary nor universally true. Plog's three samples in his Table 1 portray it as necessary. My anomalous chi square results are obviously based on the irregularly changing cultural population (the first five variables in my Table 14), not the constantly increasing spatial population. Both the cultural population and the corresponding chi square values change in an irregular manner. That reason, combined with my use of a reducing, conservative correction factor, mitigates much of Plog's criticism. I do not think that the criticism is sufficiently substantial to warrant a categorical dismissal of my experimental results.
A second surprising chi square result concerns the application of chi square to cluster samples. One would expect that the number of significant variables would increase radically because of the relative inefficiency of cluster sampling compared to simple random sampling (Blalock 1972; S. Plog this volume, and Mueller 1974). This expected result was not obtained. Four of the 47 cluster samples, or 8.51%, had more than one significant variable per sample. The corresponding figure for the other 270 probabilistic (but non-cluster) samples was 8.52%, or 23 samples.
This surprising similarity seems to support my claim that cluster sampling is an economic, as well as representative, method of surveying.
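The first stage of the design described above, testing a sample against a known population by chi square, can be sketched as follows (all counts and proportions here are hypothetical, not the author's data):

```python
import numpy as np

# Hypothetical known population proportions of sites per environmental zone.
pop_props = np.array([0.5, 0.3, 0.2])

# Observed site counts per zone in one survey sample (made-up figures).
observed = np.array([48, 33, 19])
expected = observed.sum() * pop_props

# Pearson chi-square goodness-of-fit statistic against the known population.
chi_sq = ((observed - expected) ** 2 / expected).sum()

# With 3 - 1 = 2 degrees of freedom, the 0.05 critical value is 5.991;
# a sample "passes" the first stage if it does not differ significantly.
passes = chi_sq < 5.991
```

Only samples passing this screen would then enter the second stage, the ranking by economy.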

3 citations


01 Jan 1978
TL;DR: In this paper, the authors present the formulae needed to determine sample sizes that will minimise total scaling costs for both simple random sampling and stratified random sampling, using cost and production data that is representative of weigh scaling for a New Zealand Forest Service conservancy.
Abstract: The usual statistical sampling technique of choosing a sample size to produce an estimate with a specified error limit can be improved upon in situations where indirect costs resulting from estimation errors can be evaluated. The estimation of the weight to volume conversion factor in weigh scaling is such a situation and this paper presents the formulae needed to determine sample sizes that will minimise total scaling costs for both simple random sampling and stratified random sampling. Using cost and production data that is representative of weigh scaling for a New Zealand Forest Service conservancy, the minimum total scaling cost strategy is compared with the 2½ percent error strategy in terms of sample sizes, variable and total scaling costs, and standard error attained. The comparisons illustrate the differences between the strategies when considering various stratum classifications and stumpage rates. The minimum total cost strategy produces significant savings compared to the current method and produces more accurate estimates (i.e. smaller standard error) of more valuable forest products, which is an intuitively desirable characteristic.
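The idea of choosing a sample size to minimise total cost rather than to hit a fixed error limit can be sketched under a simple assumed cost model (direct cost linear in n, indirect cost proportional to the estimator's variance); the formula and all dollar figures below are illustrative assumptions, not the paper's formulae or data:

```python
import numpy as np

# Illustrative figures only: none of these come from the paper.
sigma2 = 0.04           # variance of the weight-to-volume conversion factor
cost_per_sample = 5.0   # direct cost of measuring one sample load ($)
loss_per_var = 2.0e6    # $ indirect loss per unit variance of the estimate

def total_cost(n):
    # Direct scaling cost plus expected indirect cost from estimation error,
    # taking the indirect cost as proportional to the estimator's variance.
    return cost_per_sample * n + loss_per_var * sigma2 / n

# Closed-form minimiser of c*n + k*sigma2/n is n* = sqrt(k*sigma2/c).
n_star = np.sqrt(loss_per_var * sigma2 / cost_per_sample)
```

Under this model the optimal sample size grows with the stakes (the loss per unit variance) and shrinks with the unit sampling cost, which is the qualitative behaviour the comparison across stumpage rates in the paper reflects.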

2 citations



Journal ArticleDOI
TL;DR: In this paper, sampling characteristics of three estimators of the intraclass correlation were investigated under a variety of conditions within the context of a one-way three treatment level random effects analysis of variance.
Abstract: Some sampling characteristics of three estimators of the intraclass correlation were investigated under a variety of conditions within the context of a one-way three treatment level random effects analysis of variance. The results promote caution in the use of all three estimators since they show both a large negative bias under most conditions and a large standard deviation. The three estimators differed very little in their degree of bias or in the magnitude of their standard errors.
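The negative bias reported for intraclass correlation estimators in a small random-effects design can be reproduced by simulation. The sketch below uses the standard one-way ANOVA estimator of the ICC with three groups, mirroring the paper's three-treatment-level setting; the variance components and replicate count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def icc_anova(data):
    # ANOVA estimator of the intraclass correlation for a balanced one-way
    # random effects design: data has shape (groups, replicates).
    a, k = data.shape
    grand = data.mean()
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (a - 1)
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (a * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# True ICC: between-group variance over total variance.
sigma_a2, sigma_e2 = 1.0, 1.0
true_icc = sigma_a2 / (sigma_a2 + sigma_e2)  # 0.5

# Monte Carlo: three groups (as in the paper's design), five replicates each.
estimates = []
for _ in range(2000):
    effects = rng.normal(scale=np.sqrt(sigma_a2), size=(3, 1))
    data = effects + rng.normal(scale=np.sqrt(sigma_e2), size=(3, 5))
    estimates.append(icc_anova(data))

bias = np.mean(estimates) - true_icc
```

With only three treatment levels the average estimate falls well below the true value, consistent with the large negative bias the paper cautions about.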

Journal ArticleDOI
TL;DR: In the one-sample Student t-test, the occurrence of a type-I error is dependent on the estimates of the mean and standard deviation for a fixed sample size, n, as discussed by the authors.
Abstract: In the one-sample Student t-test, the occurrence of a type-I error is dependent on the estimates of the mean and standard deviation for a fixed sample size, n. The statistic can achieve significance either by the sample mean being too different from the hypothesized mean or by the sample standard deviation being too small. The critical region is partitioned to determine the characteristics of samples in the critical region, assuming the null hypothesis is true. As might be conjectured from the use of the t-statistic, mis-estimation of the mean is shown to be the predominant characteristic of samples in the critical region for sample sizes larger than 20 and significance level greater than 0.01. Underestimation of the variance, unless accompanied by a mis-estimation of the mean, is a far less frequent characteristic of critical region samples.
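The partition of the critical region described above can be sketched by simulation under the null; the classification thresholds below are illustrative choices, not the paper's exact partition:

```python
import numpy as np

rng = np.random.default_rng(4)

n = 25
t_crit = 2.064                    # two-sided 5% critical value, t with 24 df
mean_bound = 1.96 / np.sqrt(n)    # two-sided 5% bound for the sample mean
sd_bound = np.sqrt(13.848 / 24)   # 5th percentile of s when sigma = 1
                                  # (13.848 = chi-square(24) 5th percentile)

# Simulate samples under the null: N(0, 1), hypothesized mean 0.
reps = 20000
xs = rng.normal(size=(reps, n))
xbar = xs.mean(axis=1)
s = xs.std(axis=1, ddof=1)
rejected = np.abs(xbar) * np.sqrt(n) / s > t_crit

# Rough partition of the critical region, in the spirit of the abstract:
# among rejected samples, how many show an extreme sample mean, and how
# many an unusually small standard deviation?
frac_mean_extreme = (np.abs(xbar) > mean_bound)[rejected].mean()
frac_sd_small = (s < sd_bound)[rejected].mean()
```

With n = 25 at the 5% level, an extreme sample mean characterises the large majority of critical-region samples, while a small standard deviation alone accounts for far fewer, matching the abstract's conclusion for sample sizes above 20.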