
Showing papers on "Resampling published in 1987"


Book
01 Jan 1987
TL;DR: A monograph covering the jackknife, the bootstrap, the delta method and the influence function, cross-validation, balanced repeated replications (half-sampling), random subsampling, and nonparametric confidence intervals.
Abstract: The Jackknife Estimate of Bias; The Jackknife Estimate of Variance; Bias of the Jackknife Variance Estimate; The Bootstrap; The Infinitesimal Jackknife; The Delta Method and the Influence Function; Cross-Validation, Jackknife and Bootstrap; Balanced Repeated Replications (Half-Sampling); Random Subsampling; Nonparametric Confidence Intervals.
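
The book's two central devices are compact enough to sketch in a few lines. Below is a minimal, self-contained illustration (our own, not the book's code; function names and defaults are arbitrary) of the jackknife estimates of bias and standard error alongside a bootstrap standard error for a generic statistic:

```python
import numpy as np

def jackknife(stat, x):
    """Jackknife estimates of (bias, standard error) for stat(x)."""
    n = len(x)
    # Leave-one-out replicates of the statistic
    reps = np.array([stat(np.delete(x, i)) for i in range(n)])
    bias = (n - 1) * (reps.mean() - stat(x))
    se = np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))
    return bias, se

def bootstrap_se(stat, x, B=2000, seed=0):
    """Bootstrap standard error: resample x with replacement B times."""
    rng = np.random.default_rng(seed)
    reps = [stat(rng.choice(x, size=len(x), replace=True)) for _ in range(B)]
    return np.std(reps, ddof=1)

x = np.random.default_rng(1).exponential(size=30)
print(jackknife(np.var, x), bootstrap_se(np.var, x))
```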

7,007 citations


Journal ArticleDOI
TL;DR: In this paper, the statistical significance of variance decompositions and impulse response functions for unrestricted vector autoregressions is examined, and two methods of computing confidence intervals for them are developed: first, using a normal approximation; second, using bootstrapped resampling.
Abstract: This article questions the statistical significance of variance decompositions and impulse response functions for unrestricted vector autoregressions. It suggests that previous authors have failed to provide confidence intervals for variance decompositions and impulse response functions. Two methods of computing such confidence intervals are developed: first, using a normal approximation; second, using bootstrapped resampling. An example from Sims's work is used to illustrate the importance of computing these confidence intervals. In this example, the 95% confidence intervals for variance decompositions span up to 66 percentage points at the usual forecasting horizon.
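
To make the bootstrapped-resampling idea concrete, here is a hedged sketch in the simplest possible setting: a univariate AR(1) fitted by OLS, with residual resampling used to put percentile confidence bands on the impulse response function. The paper works with multivariate VARs; all names below are illustrative.

```python
import numpy as np

def fit_ar1(y):
    """OLS fit of y[t] = c + phi*y[t-1] + e[t]; returns ((c, phi), residuals)."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return coef, y[1:] - X @ coef

def irf(phi, horizons=12):
    """Impulse response of an AR(1) to a unit shock."""
    return phi ** np.arange(horizons)

def bootstrap_irf_bands(y, B=1000, horizons=12, seed=0):
    """95% percentile bands for the IRF via residual resampling."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    (c, phi), resid = fit_ar1(y)
    irfs = np.empty((B, horizons))
    for b in range(B):
        e = rng.choice(resid, size=len(y) - 1, replace=True)
        yb = np.empty_like(y)
        yb[0] = y[0]
        for t in range(1, len(y)):
            yb[t] = c + phi * yb[t - 1] + e[t - 1]
        (_, phib), _ = fit_ar1(yb)
        irfs[b] = irf(phib, horizons)
    lo, hi = np.percentile(irfs, [2.5, 97.5], axis=0)
    return irf(phi, horizons), lo, hi
```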

584 citations


Journal ArticleDOI
TL;DR: Results of Monte Carlo simulations indicate that statistical bias and efficiency characteristics of the proposed test of spuriousness for structural data are very reasonable.

572 citations



Journal ArticleDOI
01 Dec 1987-Ecology
TL;DR: A distribution-free approach to the detection of density-dependence in the variation of population abundance, measured by a series of annual censuses, is reported; the randomization test is shown to be effective whether or not there is a marked trend in the observed data.
Abstract: We report a distribution-free approach to the detection of density-dependence in the variation of population abundance, measured by a series of annual censuses. The method uses the correlation coefficient between the observed population changes and population size and proposes a randomization procedure to define a rejection region for the hypothesis of density-independence. It is shown that the use of the proposed statistic under the randomization approach is equivalent to the likelihood ratio test for a particular family of time series models. The randomization test is compared with two other recently proposed tests. Using computer-generated density-independent and density-dependent data, it is shown that, unlike the other tests, the randomization test is effective whether or not there is a marked trend in the observed data. Arguments are presented showing how one of the other two tests can be further improved. Caution is urged in the use and interpretation of any test for detecting density-dependence in census data because (a) the tests depend on assumptions about population processes, and (b) errors of measurement may lead to spurious detection of density-dependence.
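
A simplified rendering of the randomization idea (our sketch, not the authors' exact procedure): correlate changes in log abundance with current log abundance, then permute the observed changes and rebuild the series to generate the null distribution under density-independence.

```python
import numpy as np

def density_dependence_test(counts, n_perm=5000, seed=0):
    """One-sided randomization test: density dependence shows up as a
    negative correlation between log abundance and the following change."""
    rng = np.random.default_rng(seed)
    x = np.log(np.asarray(counts, dtype=float))
    d = np.diff(x)                        # observed population changes
    obs = np.corrcoef(x[:-1], d)[0, 1]
    null = np.empty(n_perm)
    for i in range(n_perm):
        dp = rng.permutation(d)           # shuffle the changes...
        xp = x[0] + np.concatenate([[0.0], np.cumsum(dp)])  # ...rebuild series
        null[i] = np.corrcoef(xp[:-1], dp)[0, 1]
    return obs, (np.sum(null <= obs) + 1) / (n_perm + 1)   # statistic, p-value
```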

244 citations



Book
01 Jan 1987
TL;DR: In this paper, the authors present the history, motivation, and controversy of Pitman's measure of closeness (PMC) as a criterion for comparing estimators, and develop the theory of Pitman-closest estimators.
Abstract: Preface.
Part I. Introduction: 1. Evolution of Estimation Theory (Least Squares; Method of Moments; Maximum Likelihood; Uniformly Minimum Variance Unbiased Estimation; Biased Estimation; Bayes and Empirical Bayes; Influence Functions and Resampling Techniques; Future Directions). 2. PMC Comes of Age (PMC: A Product of Controversy; PMC as an Intuitive Criterion). 3. The Scope of the Book (History, Motivation, and Controversy of PMC; A Unified Development of PMC).
Part II. Development of Pitman's Measure of Closeness: 1. The Intrinsic Appeal of PMC (Use of MSE; Historical Development of PMC; Convenience Store Example). 2. The Concept of Risk (Renyi's Decomposition of Risk; How Do We Understand Risk?). 3. Weakness in the Use of Risk (When MSE Does Not Exist; Sensitivity to the Choice of the Loss Function; The Golden Standard). 4. Joint Versus Marginal Information (Comparing Estimators with an Absolute Ideal; Comparing Estimators with One Another). 5. Concordance of PMC with MSE and MAD.
Part III. Anomalies with PMC: 1. Living in an Intransitive World (Round-Robin Competition; Voting Preferences; Transitiveness). 2. Paradoxes Among Choice (The Pairwise-Worst Simultaneous-Best Paradox; The Pairwise-Best Simultaneous-Worst Paradox; Politics: The Choice of Extremes). 3. Rao's Phenomenon. 4. The Question of Ties (Equal Probability of Ties; Correcting the Pitman Criterion; A Randomized Estimator). 5. The Rao-Berkson Controversy (Minimum Chi-Square and Maximum Likelihood; Model Inconsistency; Remarks).
Part IV. Pairwise Comparisons: 1. Geary-Rao Theorem. 2. Applications of the Geary-Rao Theorem. 3. Karlin's Corollary. 4. A Special Case of the Geary-Rao Theorem (Surjective Estimators; The MLR Property). 5. Applications of the Special Case. 6. Transitiveness (Transitiveness Theorem; Another Extension of Karlin's Corollary).
Part V. Pitman-Closest Estimators: 1. Estimation of Location Parameters. 2. Estimators of Scale. 3. Generalization via Topological Groups. 4. Posterior Pitman Closeness. 5. Linear Combinations. 6. Estimation by Order Statistics.
Part VI. Asymptotics and PMC: 1. Pitman Closeness of BAN Estimators (Modes of Convergence; Fisher Information; BAN Estimates are Pitman Closest). 2. PMC by Asymptotic Representations (A General Proposition). 3. Robust Estimation of a Location Parameter (L-Estimators; M-Estimators; R-Estimators). 4. APC Characterizations of Other Estimators (Pitman Estimators; Examples of Pitman Estimators; PMC Equivalence; Bayes Estimators). 5. Second-Order Efficiency and PMC (Asymptotic Efficiencies; Asymptotic Median Unbiasedness; Higher-Order PMC).
Index. Bibliography.
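
For readers unfamiliar with PMC, the criterion itself is easy to state and to check by simulation: estimator $T_1$ is Pitman-closer to $\theta$ than $T_2$ if $P(|T_1-\theta| < |T_2-\theta|) > 1/2$. A hypothetical Monte Carlo check (all names ours):

```python
import numpy as np

def pitman_closeness(est1, est2, theta, sampler, n_rep=10000, seed=0):
    """Monte Carlo estimate of P(|est1 - theta| < |est2 - theta|)."""
    rng = np.random.default_rng(seed)
    wins = sum(abs(est1(x) - theta) < abs(est2(x) - theta)
               for x in (sampler(rng) for _ in range(n_rep)))
    return wins / n_rep

# Sample mean vs. sample median for N(0, 1) data, n = 11
pmc = pitman_closeness(np.mean, np.median, 0.0,
                       lambda rng: rng.normal(0.0, 1.0, 11))
print(pmc)  # > 0.5: the mean is Pitman-closer than the median here
```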

115 citations


Journal ArticleDOI
TL;DR: Unified methods for incorporating misclassification information and general variance expressions into analyses based on log-linear models and maximum likelihood estimation are presented.
Abstract: Misclassification is a common source of bias and reduced efficiency in the analysis of discrete data. Several methods have been proposed to adjust for misclassification using information on error rates (i) gathered by resampling the study population, (ii) gathered by sampling a separate population, or (iii) assumed a priori. We present unified methods for incorporating these types of information into analyses based on log-linear models and maximum likelihood estimation. General variance expressions are developed. Examples from epidemiologic studies are used to demonstrate the proposed methodology.
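
The simplest member of this family of adjustments, often called the matrix method, corrects observed category proportions with a misclassification matrix estimated from a validation sample; the paper's log-linear maximum likelihood machinery generalizes this. A toy sketch with assumed numbers:

```python
import numpy as np

# M[i, j] = P(classified as category i | true category j), e.g. estimated
# by resampling a validation subsample (assumed known here).
M = np.array([[0.9, 0.2],
              [0.1, 0.8]])

observed = np.array([0.42, 0.58])         # observed category proportions
corrected = np.linalg.solve(M, observed)  # estimated true proportions
print(corrected)                          # -> about [0.314, 0.686]
```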

101 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the robustness of the Fisher-Pitman permutation test to violation of the homogeneity requirement and found that the difference between the sizes of the two tests is relatively small, except for extreme heterogeneity.
Abstract: The Fisher–Pitman permutation test is an increasingly popular alternative to the ANOVA F test. As a test of equality of distributions, the permutation test is very attractive because it retains its stated test size without any distributional requirements. As a test of equality of location parameters, however, the permutation test retains its stated test size only under equality of all nuisance parameters. This homogeneity requirement is well known, but often overlooked. As a result, the permutation test is sometimes recommended when, in fact, the distributional requirements are not satisfied. This article examines the robustness of the permutation test to violation of the homogeneity requirement. In particular, the size of the Fisher–Pitman permutation test of equality of means is compared to the size of the normal theory F test in small samples when variances are unequal. Normally distributed populations having equal means are assumed. The size of the permutation test is found to be smaller than the size of the F test when the ratio of the harmonic to the arithmetic mean of the sample sizes is small and vice versa when the ratio is large (i.e. near 1). In either case, the difference between the sizes of the two tests is relatively small, except for extreme heterogeneity. The result is based on a comparison of the moments of the permutation and normal theory sampling distributions and is supported by the results of simulation studies.
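
The size comparison is easy to reproduce in outline. The sketch below (our own; parameters are illustrative) implements the two-sample Fisher-Pitman permutation test and estimates its empirical size under equal means, unequal variances, and unbalanced sample sizes:

```python
import numpy as np

def perm_pvalue(x, y, n_perm=199, rng=None):
    """Fisher-Pitman two-sample permutation test on the mean difference."""
    rng = rng or np.random.default_rng()
    pooled, nx = np.concatenate([x, y]), len(x)
    obs = abs(x.mean() - y.mean())
    hits = sum(abs(p[:nx].mean() - p[nx:].mean()) >= obs
               for p in (rng.permutation(pooled) for _ in range(n_perm)))
    return (hits + 1) / (n_perm + 1)

# Empirical size: equal means, unequal variances, unbalanced sample sizes
rng = np.random.default_rng(2)
rej = sum(perm_pvalue(rng.normal(0, 1, 5), rng.normal(0, 3, 15), rng=rng) <= 0.05
          for _ in range(1000))
print(rej / 1000)  # compare with the nominal size 0.05
```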

89 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that for a certain "local" parameter set where the signal to noise ratio is small, it is asymptotically possible to estimate the linear model parameters using the partial likelihood as well as if the transformation were known.
Abstract: Estimates of the linear model parameters in a linear transformation model with unknown increasing transformation are obtained by maximizing a partial likelihood. A resampling scheme (likelihood sampler) is used to compute the maximum partial likelihood estimates. It is shown that for a certain "local" parameter set where the "signal to noise ratio" is small, it is asymptotically possible to estimate the linear model parameters using the partial likelihood as well as if the transformation were known. In the case of the power transformation model with symmetric error distribution, this result is shown to also hold when the distribution of the error in the transformed linear model is unknown and is estimated. Monte Carlo results are used to show that for moderate sample size and small to moderate signal to noise ratio, the asymptotic results are approximately in effect and thus the partial likelihood estimates perform very well. Estimates of the transformation are introduced and it is shown that the estimates, when centered at the transformation and multiplied by $\sqrt{n}$, converge weakly to Gaussian processes.

78 citations


Journal ArticleDOI
TL;DR: The bootstrap method is applied here to the problem of estimating the standard deviation of the estimated midpoint and spread of a sensory-performance function based on data sets comprising 15–25 trials, and proved clearly superior to the incremental method.
Abstract: The bootstrap method, due to Bradley Efron, is a powerful, general method for estimating a variance or standard deviation by repeatedly resampling the given set of experimental data. The method is applied here to the problem of estimating the standard deviation of the estimated midpoint and spread of a sensory-performance function based on data sets comprising 15–25 trials. The performance of the bootstrap estimator was assessed in Monte Carlo studies against another general estimator obtained by the classical “combination-of-observations” or incremental method. The bootstrap method proved clearly superior to the incremental method, yielding much smaller percentage biases and much greater efficiencies. Its use in the analysis of sensory-performance data may be particularly appropriate when traditional asymptotic procedures, including the probit-transformation approach, become unreliable.
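
A toy version of the procedure (our illustration: the sensory-performance function is replaced by a straight line through the proportion-correct points, which stands in for the paper's actual psychometric fit):

```python
import numpy as np

def midpoint_spread(levels, p):
    """Straight-line stand-in for the psychometric fit: midpoint is the
    0.5 crossing, spread the gap between the 0.25 and 0.75 crossings."""
    slope, intercept = np.polyfit(levels, p, 1)
    return (0.5 - intercept) / slope, 0.5 / slope

def bootstrap_ses(levels, k, n, B=1000, seed=0):
    """Bootstrap SEs of (midpoint, spread): resample the binary trials
    at each stimulus level via the binomial distribution."""
    rng = np.random.default_rng(seed)
    p_hat = k / n
    reps = np.array([midpoint_spread(levels, rng.binomial(n, p_hat) / n)
                     for _ in range(B)])
    return reps.std(axis=0, ddof=1)

levels = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # stimulus levels (assumed)
k = np.array([2, 5, 10, 16, 19])               # correct responses (assumed)
n = np.array([20, 20, 20, 20, 20])             # trials per level (assumed)
print(bootstrap_ses(levels, k, n))
```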

Journal ArticleDOI
Hannu Oja
TL;DR: In this article, distribution-free permutation tests and corresponding estimates for studying the effect of a treatment variable x on a response y were introduced, based on the assumption that the treatment values are assigned randomly to the subjects.
Abstract: We introduce distribution-free permutation tests and corresponding estimates for studying the effect of a treatment variable x on a response y. The methods apply in the presence of a multivariate covariate z. They are based on the assumption that the treatment values are assigned randomly to the subjects.
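
Oja's tests have a specific statistic; the sketch below shows only the generic structure such a test can take (residualize the response on the covariates, then permute the randomly assigned treatment values), with all names our own:

```python
import numpy as np

def residualize(y, Z):
    """Least-squares residuals of y after regressing on covariates Z."""
    Z1 = np.column_stack([np.ones(len(y)), Z])
    beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
    return y - Z1 @ beta

def treatment_perm_test(x, y, Z, n_perm=5000, seed=0):
    """Permutation p-value for 'treatment x has no effect on response y',
    given covariates Z; valid because x was randomly assigned, hence
    exchangeable under the null."""
    rng = np.random.default_rng(seed)
    r = residualize(y, Z)
    obs = abs(np.corrcoef(x, r)[0, 1])
    null = np.array([abs(np.corrcoef(rng.permutation(x), r)[0, 1])
                     for _ in range(n_perm)])
    return (np.sum(null >= obs) + 1) / (n_perm + 1)
```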

Journal ArticleDOI
TL;DR: In this paper, several multivariate tests for differences of the mean which are based upon resampling schemes are examined in a series of Monte Carlo experiments and it is argued that the sensitivity of one with respect to the other depends upon the spatial correlation structure of the observed fields.
Abstract: Several multivariate tests for differences of the mean which are based upon resampling schemes are examined in a series of Monte Carlo experiments. We examine the power of these tests under two sets of experimental situations: one in which the resolution of the simulated observing network increases, and one in which the simulated observing network expands geographically with a fixed resolution. The behavior of these essentially nonparametric tests is compared with classical multivariate tests, and it is argued that the sensitivity of one with respect to the other depends upon the spatial correlation structure of the observed fields. The question of whether or not to reduce the dimensionality of the observed fields prior to conducting a statistical test is studied, as is the effect of temporal correlation upon tests based on resampling schemes.
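
A minimal example of the kind of resampling test being compared (our sketch; the statistic is one simple choice among many): permute the pooled sample fields and recompute the squared distance between mean fields.

```python
import numpy as np

def mean_field_test(A, B, n_perm=2000, seed=0):
    """Permutation test for equality of mean fields. A and B are
    (n_samples, n_gridpoints) arrays; the statistic is the squared
    Euclidean distance between the two sample mean fields."""
    rng = np.random.default_rng(seed)
    obs = np.sum((A.mean(axis=0) - B.mean(axis=0)) ** 2)
    X, nA = np.vstack([A, B]), len(A)
    null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(X))
        null[i] = np.sum((X[idx[:nA]].mean(axis=0)
                          - X[idx[nA:]].mean(axis=0)) ** 2)
    return (np.sum(null >= obs) + 1) / (n_perm + 1)
```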

Posted Content
TL;DR: In this paper, two methods of computing confidence intervals for variance decompositions and impulse response functions for unrestricted vector autoregressions are developed, one using a normal approximation, the other using bootstrapped resampling.
Abstract: The statistical significance of variance decompositions and impulse response functions for unrestricted vector autoregressions is questionable. Most previous studies are suspect because they have not provided confidence intervals for variance decompositions and impulse response functions. Here two methods of computing such intervals are developed, one using a normal approximation, the other using bootstrapped resampling. An example from Sims's work illustrates the importance of computing these confidence intervals. In the example, the 95 percent confidence intervals for variance decompositions span up to 66 percentage points at the usual forecasting horizon.

Journal ArticleDOI
TL;DR: In this article, a re-formulation of Oja's test statistic is given which has the advantages of ease of calculation, explicit formulas for permutation moments, and allowing a Beta distribution to be fitted to the exact null distribution.
Abstract: Oja (1987) presents some distribution-free tests applicable in the presence of covariates when treatment values are randomly assigned. The formulas and calculations are cumbersome, however, and implementation of the tests relies on using a χ2 approximation to the exact null distribution. In this paper a re-formulation of his test statistic is given which has the advantages of ease of calculation, explicit formulas for permutation moments, and allowing a Beta distribution to be fitted to the exact null distribution.
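
The practical payoff of explicit permutation moments is that a Beta approximation follows by matching moments. A generic sketch (not the paper's formulas) for a statistic supported on a known interval:

```python
from scipy import stats

def beta_pvalue(t_obs, mean, var, lo, hi):
    """Upper-tail p-value from a method-of-moments Beta fit for a
    statistic supported on [lo, hi] with known permutation moments."""
    m = (mean - lo) / (hi - lo)     # mean rescaled to [0, 1]
    v = var / (hi - lo) ** 2        # variance rescaled to [0, 1]
    c = m * (1 - m) / v - 1         # alpha + beta
    return stats.beta.sf((t_obs - lo) / (hi - lo), c * m, c * (1 - m))
```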

ReportDOI
01 Apr 1987
TL;DR: In this report, the author cautions that the need for robust/resistant techniques in the distillation process and the question of the proper error term are in no way automatically addressed by using a jackknife or bootstrap.
Abstract: Resampling methods are in the process of becoming popular ways of assessing the standard error appropriate to some number chosen to distill, perhaps in a rather complex way, from data. An ever-present danger is that resampling will come to be thought of as a cure-all. The most that can reasonably be hoped for is that questions that do not arise in connection with the simplest distillates (arithmetic means of samples) need not be considered in connection with resampling applied to the results of more complicated calculations. This means that, in particular, (a) the need for robust/resistant techniques in the distillation process and (b) the need to consider the question of the proper error term are in no way automatically addressed by the use of the jackknife or bootstrap. Robust/resistant techniques, if required, must be built into the calculation of the final distillate from the observations, before that distillate is jackknifed or bootstrapped. A year of weather-related data offers a good platform for careful discussion.
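
The report's central caution can be made concrete: resistance, if needed, must live inside the resampled statistic. A hypothetical sketch that bootstraps a trimmed mean, with the robust step built into the distillate itself:

```python
import numpy as np

def trimmed_mean(x, prop=0.2):
    """The robust/resistant step lives inside the distillate itself."""
    xs = np.sort(x)
    k = int(prop * len(xs))
    return xs[k:len(xs) - k].mean()

def bootstrap_se(stat, x, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    return np.std([stat(rng.choice(x, len(x), replace=True))
                   for _ in range(B)], ddof=1)

x = np.random.default_rng(3).standard_cauchy(40)  # heavy-tailed data
# Bootstrapping the trimmed mean is stable; bootstrapping the raw mean
# of Cauchy data is not, and no amount of resampling fixes that.
print(bootstrap_se(trimmed_mean, x), bootstrap_se(np.mean, x))
```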

01 Jan 1987
TL;DR: In this paper, the estimation of mean and standard errors of the eigenvalues and category quantifications in generalized non-linear canonical correlation analysis (OVERALS) is discussed, and both the jackknife and bootstrap methods are compared for providing finite difference approximations to the derivatives.
Abstract: The estimation of mean and standard errors of the eigenvalues and category quantifications in generalized non-linear canonical correlation analysis (OVERALS) is discussed. Starting points are the delta method equations. The jackknife and bootstrap methods are compared for providing finite difference approximations to the derivatives. Examining the basic properties of the jackknife method indicates that the vector of profile proportions is perturbed by leaving out single observations. The grid of perturbed values is used to estimate relevant derivatives. Bootstrapping means resampling with replacement from the original sample. Both procedures, bootstrapping and jackknifing, are used to compute pseudo-value means and standard errors for four different data sets: (1) the characteristics of 36 kinds of marine mammals; (2) data describing the attributes of 47 countries; (3) data from a study of school choice for 520 children leaving elementary school; and (4) a sample of 4,863 secondary students from the Second International Mathematics Study. For the small data sets the jackknife and bootstrap were used; for the larger sets, Monte Carlo versions of both were used. The jackknife method appeared less precise than the bootstrap method, and jackknife approximations were less stable for smaller samples. It is concluded that the bootstrap method performed better than did the jackknife method. For large samples, the bootstrap procedure works quite well for computing confidence intervals, and eigenvalues computed from OVERALS seem quite stable.
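
The OVERALS eigenvalues themselves require the full program; as a stand-in, this sketch compares jackknife and bootstrap standard errors for the leading eigenvalue of an ordinary correlation matrix (all names ours):

```python
import numpy as np

def top_eigenvalue(X):
    """Largest eigenvalue of the correlation matrix of X (n x p)."""
    return np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[-1]

def jackknife_se(X):
    n = len(X)
    reps = np.array([top_eigenvalue(np.delete(X, i, axis=0))
                     for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))

def bootstrap_se(X, B=1000, seed=0):
    rng = np.random.default_rng(seed)
    reps = [top_eigenvalue(X[rng.integers(0, len(X), len(X))])
            for _ in range(B)]
    return np.std(reps, ddof=1)

X = np.random.default_rng(4).normal(size=(50, 4))
print(jackknife_se(X), bootstrap_se(X))
```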

Journal ArticleDOI
E A Boeker
TL;DR: Computer-based analytical and statistical techniques are developed for extracting kinetic constants by fitting the integrated rate equation for reactions with stoichiometry $A \rightarrow P + Q$, using the 21 progress curves described in the accompanying paper; the residuals appear to be normally distributed and do not correlate with the amount of product produced.
Abstract: The integrated rate equation for reactions with stoichiometry $A \rightarrow P + Q$ is $e_0 t = -C_f \ln(1 - \Delta P/A_0) + C_1 \Delta P + \tfrac{1}{2} C_2 (\Delta P)^2$, where the coefficients $C$ are linear or quadratic functions of the kinetic constants and the initial substrate and product concentrations. I have used the 21 progress curves described in the accompanying paper [Cox & Boeker (1987) Biochem. J. 245, 59-65] to develop computer-based analytical and statistical techniques for extracting kinetic constants by fitting this equation. The coefficients $C$ were calculated by an unweighted non-linear regression: first approximations were obtained from a multiple regression of $t$ on $\Delta P$ and were refined by the Gauss-Newton method. The procedure converged in six iterations or less. The bias in the coefficients $C$ was estimated by four methods and did not appear to be significant. The residuals in the progress curves appear to be normally distributed and do not correlate with the amount of product produced. Variances for $C_f$, $C_1$ and $C_2$ were estimated by four resampling procedures, which gave essentially identical results, and by matrix inversion, which came close to the others. The reliability of $C_2$ can also be estimated by using an analysis-of-variance method that does not require resampling. The final kinetic constants were calculated by standard multiple regression, weighting each coefficient according to its variance. The weighted residuals from this procedure were normally distributed.
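
Because $e_0 t$ is linear in $C_f$, $C_1$, and $C_2$, the first-approximation step is a multiple regression; resampling the progress-curve points then gives variance estimates. A sketch under assumed variable names (the paper refines the fit by Gauss-Newton, which is omitted here):

```python
import numpy as np

def fit_coefficients(t, dP, e0, A0):
    """First approximations to (Cf, C1, C2): multiple regression of
    e0*t on the three known functions of delta P."""
    X = np.column_stack([-np.log(1.0 - dP / A0), dP, 0.5 * dP ** 2])
    coef, *_ = np.linalg.lstsq(X, e0 * t, rcond=None)
    return coef

def bootstrap_coef_ses(t, dP, e0, A0, B=1000, seed=0):
    """Case-resampling bootstrap standard errors for the coefficients,
    one of several resampling variance estimates compared in the paper."""
    rng = np.random.default_rng(seed)
    n = len(t)
    reps = np.array([fit_coefficients(t[idx], dP[idx], e0, A0)
                     for idx in (rng.integers(0, n, n) for _ in range(B))])
    return reps.std(axis=0, ddof=1)
```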

Journal ArticleDOI
Kazuhiro Ohtani
TL;DR: In this article, the problem of pooling variances is examined from the viewpoint of testing linear restrictions on regression coefficients, and it is shown that the bias in the size of the two-stage test may not be important.

Journal ArticleDOI
TL;DR: It is shown that the test proposed by Liang (1985) as an alternative to the Mantel-Haenszel procedure, for testing the significance of an estimated common odds ratio when the binomial assumption for the observed cell frequencies is relaxed, is a form of Fisher's one-sample permutation test.
Abstract: Liang (1985) proposed an alternative to the Mantel-Haenszel procedure for testing the significance of an estimated common odds ratio when the binomial assumption for the observed cell frequencies is relaxed. It is shown that this procedure is a form of Fisher's one-sample permutation test.

Patent
11 May 1987
TL;DR: In this article, the authors apply effective resampling phase correction with simple circuit constitution by adding (n-1) sets of analog holding circuits, where each analog holding circuit consists of a switch, a resistor and a capacitor, the switch is closed by a gate pulse to fetch an output signal (d) of the digital/analog conversion circuit 4 and the signal is held for a prescribed time until the next gate pulse (g) comes.
Abstract: PURPOSE: To apply effective resampling phase correction with a simple circuit constitution by adding (n-1) sets of analog holding circuits. CONSTITUTION: By providing (n-1) sets of analog holding circuits 5, the outputs of the (n-1) digital/analog converting circuits 4 are held until a resampling pulse is input to the resampling gate circuits 6, 7, which then resample simultaneously to apply the phase correction. Each analog holding circuit 5 consists of a switch, a resistor, and a capacitor; the switch is closed by a gate pulse (g) to fetch the output signal (d) of the digital/analog conversion circuit 4, and the signal is held for a prescribed time until the next gate pulse (g) arrives. The resampling gate circuit 7 opens its gate on a resampling pulse (e), receives the output signal (d), and applies resampling to achieve the phase correction, outputting an R-side PAM signal fR. The resampling gate circuit 6 opens its gate on the same resampling pulse (e) and outputs an L-side PAM signal whose phase coincides with that of the R-side PAM signal fR.


Book ChapterDOI
01 Jan 1987
TL;DR: The bootstrap is one of several resampling methods developed recently by statisticians; a key theme of these methods is the substitution of computational power for theoretical analysis.
Abstract: The bootstrap is one of several resampling methods that have been developed recently by statisticians. Bradley Efron (1982, pp. 2–3), who developed the bootstrap, notes that these methods “are prodigious computational spendthrifts.… An important theme…is the substitution of computational power for theoretical analysis. The payoff…is freedom from the constraints of traditional parametric theory, with its over-reliance on a small set of standard models for which theoretical solutions are available.…