
Showing papers on "Resampling published in 2001"


BookDOI
01 Jan 2001
TL;DR: In this article, the authors present a case study in least squares fitting and interpretation of a linear model, where they use nonparametric transformations of X and Y to fit a linear regression model.
Abstract: Introduction * General Aspects of Fitting Regression Models * Missing Data * Multivariable Modeling Strategies * Resampling, Validating, Describing, and Simplifying the Model * S-PLUS Software * Case Study in Least Squares Fitting and Interpretation of a Linear Model * Case Study in Imputation and Data Reduction * Overview of Maximum Likelihood Estimation * Binary Logistic Regression * Logistic Model Case Study 1: Predicting Cause of Death * Logistic Model Case Study 2: Survival of Titanic Passengers * Ordinal Logistic Regression * Case Study in Ordinal Regression, Data Reduction, and Penalization * Models Using Nonparametric Transformations of X and Y * Introduction to Survival Analysis * Parametric Survival Models * Case Study in Parametric Survival Modeling and Model Approximation * Cox Proportional Hazards Regression Model * Case Study in Cox Regression

7,264 citations


Journal ArticleDOI
TL;DR: An integrated approach to fitting psychometric functions, assessing the goodness of fit, and providing confidence intervals for the function’s parameters and other estimates derived from them, for the purposes of hypothesis testing is described.
Abstract: The psychometric function relates an observer’s performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. This paper, together with its companion paper (Wichmann & Hill, 2001), describes an integrated approach to (1) fitting psychometric functions, (2) assessing the goodness of fit, and (3) providing confidence intervals for the function’s parameters and other estimates derived from them, for the purposes of hypothesis testing. The present paper deals with the first two topics, describing a constrained maximum-likelihood method of parameter estimation and developing several goodness-of-fit tests. Using Monte Carlo simulations, we deal with two specific difficulties that arise when fitting functions to psychophysical data. First, we note that human observers are prone to stimulus-independent errors (or lapses). We show that failure to account for this can lead to serious biases in estimates of the psychometric function’s parameters and illustrate how the problem may be overcome. Second, we note that psychophysical data sets are usually rather small by the standards required by most of the commonly applied statistical tests. We demonstrate the potential errors of applying traditional χ² methods to psychophysical data and advocate use of Monte Carlo resampling techniques that do not rely on asymptotic theory. We have made available the software to implement our methods.

2,263 citations
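
The Monte Carlo goodness-of-fit assessment described above can be illustrated with a small parametric-bootstrap sketch: fit a psychometric function by maximum likelihood, compute its deviance, then repeatedly simulate binomial data from the fitted function, refit, and compare the observed deviance against the simulated ones rather than against an asymptotic χ² reference. This is only an illustration under assumed choices (a logistic function on log stimulus level, fixed guess and lapse rates, made-up 2AFC counts), not the authors' software.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2AFC data: stimulus levels, number of trials, correct responses.
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n = np.array([40, 40, 40, 40, 40])
k = np.array([22, 26, 31, 37, 39])

def psi(x, alpha, beta, gamma=0.5, lam=0.01):
    """Logistic psychometric function with guess rate gamma and lapse rate lam."""
    return gamma + (1 - gamma - lam) / (1 + np.exp(-beta * (np.log(x) - np.log(alpha))))

def neg_log_lik(theta, x, n, k):
    p = np.clip(psi(x, *theta), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

def fit(x, n, k, start=(2.0, 2.0)):
    return minimize(neg_log_lik, start, args=(x, n, k), method="Nelder-Mead").x

def deviance(theta, x, n, k):
    p = np.clip(psi(x, *theta), 1e-9, 1 - 1e-9)
    p_sat = np.clip(k / n, 1e-9, 1 - 1e-9)   # saturated model proportions
    return 2 * np.sum(k * np.log(p_sat / p) + (n - k) * np.log((1 - p_sat) / (1 - p)))

rng = np.random.default_rng(0)
theta_hat = fit(x, n, k)
d_obs = deviance(theta_hat, x, n, k)

# Parametric bootstrap of the deviance: simulate from the fitted function and refit.
d_sim = []
for _ in range(2000):
    k_sim = rng.binomial(n, psi(x, *theta_hat))
    d_sim.append(deviance(fit(x, n, k_sim), x, n, k_sim))
p_value = np.mean(np.array(d_sim) >= d_obs)
print(f"deviance = {d_obs:.2f}, Monte Carlo p = {p_value:.3f}")
```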


Journal ArticleDOI
TL;DR: This paper provides a summary of recent empirical and theoretical results concerning available methods and gives recommendations for their use in univariate and multivariate applications.
Abstract: The most appropriate strategy to be used to create a permutation distribution for tests of individual terms in complex experimental designs is currently unclear. There are often many possibilities, including restricted permutation or permutation of some form of residuals. This paper provides a summary of recent empirical and theoretical results concerning available methods and gives recommendations for their use in univariate and multivariate applications. The focus of the paper is on complex designs in analysis of variance and multiple regression (i.e., linear models). The assumption of exchangeability required for a permutation test is assured by random allocation of treatments to units in experimental work. For observational data, exchangeability is tantamount to the assumption of independent and identically distributed errors under a null hypothesis. For partial regression, the method of permutation of residuals under a reduced model has been shown to provide the best test. For analysis of variance, o...

1,240 citations
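
The recommended strategy of permuting residuals under a reduced model (the Freedman & Lane method) for testing a partial regression coefficient can be sketched compactly. The sketch below is a minimal Python illustration for a single covariate of interest with one nuisance covariate; the variable names, the raw coefficient used as the test statistic, and the toy data are assumptions.

```python
import numpy as np

def freedman_lane_test(y, x, z, n_perm=4999, seed=0):
    """Permutation test for the partial effect of x on y, controlling for z,
    by permuting residuals of the reduced model (y ~ z).
    Returns the observed coefficient of x and its permutation p-value
    (a studentized statistic could be used instead of the raw coefficient)."""
    rng = np.random.default_rng(seed)
    Z = np.column_stack([np.ones(len(y)), z])            # reduced design
    X = np.column_stack([Z, x])                          # full design

    def stat(response):
        beta, *_ = np.linalg.lstsq(X, response, rcond=None)
        return beta[-1]                                   # coefficient of x

    # Fit the reduced model and keep its fitted values and residuals.
    gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)
    fitted = Z @ gamma
    resid = y - fitted

    t_obs = stat(y)
    count = 0
    for _ in range(n_perm):
        y_perm = fitted + rng.permutation(resid)          # permute residuals only
        if abs(stat(y_perm)) >= abs(t_obs):
            count += 1
    return t_obs, (count + 1) / (n_perm + 1)

# Hypothetical example data.
rng = np.random.default_rng(1)
z = rng.normal(size=50)
x = 0.5 * z + rng.normal(size=50)
y = 1.0 + 0.8 * z + 0.3 * x + rng.normal(size=50)
print(freedman_lane_test(y, x, z))
```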


Book
01 Jan 2001
TL;DR: Fisheries and Modelling Fish Population Dynamics The Objectives of Stock Assessment Characteristics of Mathematical Models Types of Model Structure Simple Population Models Introduction Assumptions-Explicit and Implicit Density-Independent Growth Density-Dependent Models Responses to Fishing Pressure The Logistic Model in Fisheries Age-Structured Models Simple Yield-per-Recruit Model Parameter Estimation Models and Data Least Squared Residuals Nonlinear Estimation Likelihood Bayes' The
Abstract: Fisheries and Modelling Fish Population Dynamics The Objectives of Stock Assessment Characteristics of Mathematical Models Types of Model Structure Simple Population Models Introduction Assumptions-Explicit and Implicit Density-Independent Growth Density-Dependent Models Responses to Fishing Pressure The Logistic Model in Fisheries Age-Structured Models Simple Yield-per-Recruit Model Parameter Estimation Models and Data Least Squared Residuals Nonlinear Estimation Likelihood Bayes' Theorem Concluding Remarks Computer-Intensive Methods Introduction Resampling Randomization Tests Jackknife Methods Bootstrapping Methods Monte Carlo Methods Bayesian Methods Relationships between Methods Computer Programming Randomization Tests Introduction Hypothesis Testing Randomization of Structured Data Statistical Bootstrap Methods The Jackknife and Pseudo Values The Bootstrap Bootstrap Statistics Bootstrap Confidence Intervals Concluding Remarks Monte Carlo Modelling Monte Carlo Models Practical Requirements A Simple Population Model A Non-Equilibrium Catch Curve Concluding Remarks Characterization of Uncertainty Introduction Asymptotic Standard Errors Percentile Confidence Intervals Using Likelihoods Likelihood Profile Confidence Intervals Percentile Likelihood Profiles for Model Outputs Markov Chain Monte Carlo (MCMC) Conclusion Growth of Individuals Growth in Size von Bertalanffy Growth Model Alternatives to von Bertalanffy Comparing Growth Curves Concluding Remarks Stock Recruitment Relationships Recruitment and Fisheries Stock Recruitment Biology Beverton-Holt Recruitment Model Ricker Model Deriso's Generalized Model Residual Error Structure The Impact of Measurement Errors Environmental Influences Recruitment in Age-Structured Models Concluding Remarks Surplus Production Models Introduction Equilibrium Methods Surplus Production Models Observation Error Estimates Beyond Simple Models Uncertainty of Parameter Estimates Risk Assessment Projections Practical Considerations Conclusions Age-Structured Models Types of Models Cohort Analysis Statistical Catch-at-Age Concluding Remarks Size-Based Models Introduction The Model Structure Conclusion Appendix: The Use of Excel in Fisheries Bibliography Index

1,036 citations


Journal ArticleDOI
TL;DR: The present paper’s principal topic is the estimation of the variability of fitted parameters and derived quantities, such as thresholds and slopes; it also introduces bias-corrected and accelerated confidence intervals that improve on the parametric and percentile-based bootstrap confidence intervals previously used.
Abstract: The psychometric function relates an observer's performance to an independent variable, usually a physical quantity of an experimental stimulus. Even if a model is successfully fit to the data and its goodness of fit is acceptable, experimenters require an estimate of the variability of the parameters to assess whether differences across conditions are significant. Accurate estimates of variability are difficult to obtain, however, given the typically small size of psychophysical data sets: Traditional statistical techniques are only asymptotically correct and can be shown to be unreliable in some common situations. Here and in our companion paper (Wichmann & Hill, 2001), we suggest alternative statistical techniques based on Monte Carlo resampling methods. The present paper's principal topic is the estimation of the variability of fitted parameters and derived quantities, such as thresholds and slopes. First, we outline the basic bootstrap procedure and argue in favor of the parametric, as opposed to the nonparametric, bootstrap. Second, we describe how the bootstrap bridging assumption, on which the validity of the procedure depends, can be tested. Third, we show how one's choice of sampling scheme (the placement of sample points on the stimulus axis) strongly affects the reliability of bootstrap confidence intervals, and we make recommendations on how to sample the psychometric function efficiently. Fourth, we show that, under certain circumstances, the (arbitrary) choice of the distribution function can exert an unwanted influence on the size of the bootstrap confidence intervals obtained, and we make recommendations on how to avoid this influence. Finally, we introduce improved confidence intervals (bias corrected and accelerated) that improve on the parametric and percentile-based bootstrap confidence intervals previously used. Software implementing our methods is available.

838 citations
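
The bias-corrected and accelerated (BCa) intervals mentioned at the end of the abstract adjust the percentile endpoints using a bias-correction term and a jackknife-based acceleration term. A generic sketch of that construction follows, assuming the bootstrap replicates have already been generated; the placeholder statistic (a sample mean) and nonparametric resampling are illustrative stand-ins for the paper's parametric bootstrap of psychometric-function quantities.

```python
import numpy as np
from scipy.stats import norm

def bca_interval(data, statistic, boot_reps, alpha=0.05):
    """Bias-corrected and accelerated (BCa) bootstrap interval.
    `boot_reps` is a 1-D array of bootstrap replicates of `statistic(data)`."""
    theta_hat = statistic(data)
    # Bias correction: proportion of replicates below the point estimate.
    z0 = norm.ppf(np.mean(boot_reps < theta_hat))
    # Acceleration from the jackknife.
    n = len(data)
    jack = np.array([statistic(np.delete(data, i)) for i in range(n)])
    d = jack.mean() - jack
    a = (d ** 3).sum() / (6.0 * (d ** 2).sum() ** 1.5)
    # Adjusted percentile levels.
    z = norm.ppf([alpha / 2, 1 - alpha / 2])
    adj = norm.cdf(z0 + (z0 + z) / (1 - a * (z0 + z)))
    return np.quantile(boot_reps, adj)

# Placeholder example: BCa interval for a mean, with nonparametric resampling.
rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=60)
boot = np.array([rng.choice(data, size=len(data)).mean() for _ in range(4000)])
print(bca_interval(data, np.mean, boot))
```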


Journal ArticleDOI
TL;DR: This work proposes a new simulation technique for tracking moving target distributions that, unlike existing particle filters, does not suffer from a progressive degeneration as the target sequence evolves.
Abstract: Markov chain Monte Carlo (MCMC) sampling is a numerically intensive simulation technique which has greatly improved the practicality of Bayesian inference and prediction. However, MCMC sampling is too slow to be of practical use in problems involving a large number of posterior (target) distributions, as in dynamic modelling and predictive model selection. Alternative simulation techniques for tracking moving target distributions, known as particle filters, which combine importance sampling, importance resampling and MCMC sampling, tend to suffer from a progressive degeneration as the target sequence evolves. We propose a new technique, based on these same simulation methodologies, which does not suffer from this progressive degeneration.

828 citations
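
For context, the standard combination of importance sampling and multinomial resampling that the paper builds on (the bootstrap-style particle filter) can be sketched for a toy linear-Gaussian state-space model; the model, its parameters and the function names are assumptions, and the repeated resampling step shown is exactly what produces the progressive degeneration the paper's new technique is designed to avoid.

```python
import numpy as np

def particle_filter(y, n_particles=1000, phi=0.9, sigma_state=1.0, sigma_obs=1.0, seed=0):
    """Sequential importance sampling with multinomial resampling for the model
        x_t = phi * x_{t-1} + state noise,   y_t = x_t + observation noise.
    Returns the filtered posterior means E[x_t | y_1..t]."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, sigma_state, n_particles)   # arbitrary initialisation
    means = []
    for obs in y:
        # Propagate particles through the state equation (proposal = prior).
        particles = phi * particles + rng.normal(0.0, sigma_state, n_particles)
        # Importance weights from the observation density.
        log_w = -0.5 * ((obs - particles) / sigma_obs) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        means.append(np.sum(w * particles))
        # Multinomial resampling; repeated over many steps, this step is the
        # source of the progressive degeneration discussed in the abstract.
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(means)

# Simulate hypothetical data and run the filter.
rng = np.random.default_rng(1)
x = np.zeros(100)
for t in range(1, 100):
    x[t] = 0.9 * x[t - 1] + rng.normal()
y = x + rng.normal(size=100)
print(particle_filter(y)[:5])
```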


Journal ArticleDOI
TL;DR: This study contrasts the effectiveness, in terms of power and type I error rates, of the Mantel test and PROTEST and illustrates the application of Procrustes superimposition to visually examine the concordance of observations for each dimension separately.
Abstract: The Mantel test provides a means to test the association between distance matrices and has been widely used in ecological and evolutionary studies. Recently, another permutation test based on a Procrustes statistic (PROTEST) was developed to compare multivariate data sets. Our study contrasts the effectiveness, in terms of power and type I error rates, of the Mantel test and PROTEST. We illustrate the application of Procrustes superimposition to visually examine the concordance of observations for each dimension separately and how to conduct hypothesis testing in which the association between two data sets is tested while controlling for the variation related to other sources of data. Our simulation results show that PROTEST is as powerful or more powerful than the Mantel test for detecting matrix association under a variety of possible scenarios. As a result of the increased power of PROTEST and the ability to assess the match for individual observations (not available with the Mantel test), biologists now have an additional and powerful analytical tool to study ecological and evolutionary relationships.

794 citations
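
The Mantel permutation test itself is easy to sketch: correlate the off-diagonal entries of the two distance matrices and rebuild the null distribution by permuting the rows and columns of one matrix in tandem. The sketch below uses placeholder distance matrices and a one-sided test; the Procrustes-based PROTEST statistic is not shown.

```python
import numpy as np

def mantel_test(D1, D2, n_perm=4999, seed=0):
    """Mantel test of association between two symmetric distance matrices.
    Rows/columns of D2 are permuted jointly to build the null distribution
    (one-sided test for positive association)."""
    rng = np.random.default_rng(seed)
    n = D1.shape[0]
    iu = np.triu_indices(n, k=1)             # upper-triangle (off-diagonal) entries

    def corr(A, B):
        return np.corrcoef(A[iu], B[iu])[0, 1]

    r_obs = corr(D1, D2)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        if corr(D1, D2[np.ix_(p, p)]) >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Placeholder distance matrices built from random points and a perturbed copy.
rng = np.random.default_rng(2)
pts = rng.normal(size=(30, 2))
pts2 = pts + 0.3 * rng.normal(size=pts.shape)
D1 = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
D2 = np.linalg.norm(pts2[:, None] - pts2[None, :], axis=-1)
print(mantel_test(D1, D2))
```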


Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the performance of the bootstrap resampling method for estimating model test statistic p values and parameter standard errors under nonnormal data conditions.
Abstract: Though the common default maximum likelihood estimator used in structural equation modeling is predicated on the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to utilize distribution-free estimation methods. Fortunately, promising alternatives are being integrated into popular software packages. Bootstrap resampling, which is offered in AMOS (Arbuckle, 1997), is one potential solution for estimating model test statistic p values and parameter standard errors under nonnormal data conditions. This study is an evaluation of the bootstrap method under varied conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Accuracy of the test statistic p values is evaluated in terms of model rejection rates, whereas accuracy of bootstrap standard error estimates takes the form of bias and variability of the standard error estimates thems...

715 citations


Journal ArticleDOI
TL;DR: A new method based on random permutation after orthogonal transformation of the observed time series to the wavelet domain is introduced, and it is concluded that wavelet resampling may be a generally useful method for inference on naturally complex time series.
Abstract: Even in the absence of an experimental effect, functional magnetic resonance imaging (fMRI) time series generally demonstrate serial dependence. This colored noise or endogenous autocorrelation typically has disproportionate spectral power at low frequencies, i.e., its spectrum is (1/f)-like. Various pre-whitening and pre-coloring strategies have been proposed to make valid inference on standardised test statistics estimated by time series regression in this context of residually autocorrelated errors. Here we introduce a new method based on random permutation after orthogonal transformation of the observed time series to the wavelet domain. This scheme exploits the general whitening or decorrelating property of the discrete wavelet transform and is implemented using a Daubechies wavelet with four vanishing moments to ensure exchangeability of wavelet coefficients within each scale of decomposition. For (1/f)-like or fractal noises, e.g., realisations of fractional Brownian motion (fBm) parameterised by Hurst exponent 0 < H < 1, this resampling algorithm exactly preserves wavelet-based estimates of the second order stochastic properties of the (possibly nonstationary) time series. Performance of the method is assessed empirically using (1/f)-like noise simulated by multiple physical relaxation processes, and experimental fMRI data. Nominal type 1 error control in brain activation mapping is demonstrated by analysis of 13 images acquired under null or resting conditions. Compared to autoregressive pre-whitening methods for computational inference, a key advantage of wavelet resampling seems to be its robustness in activation mapping of experimental fMRI data acquired at 3 Tesla field strength. We conclude that wavelet resampling may be a generally useful method for inference on naturally complex time series.

620 citations
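
The wavelet resampling scheme amounts to: transform the series to the wavelet domain, randomly permute coefficients within each scale, and invert the transform. A minimal sketch using the PyWavelets package follows; the db4 wavelet matches the four-vanishing-moments choice described above, while the series length, the default decomposition depth, and the toy 1/f-like input are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_resample(ts, wavelet="db4", seed=0):
    """Resample a time series by permuting its discrete wavelet coefficients
    within each scale of decomposition, then inverting the transform.
    This preserves the scale-wise (second-order) structure of 1/f-like noise."""
    rng = np.random.default_rng(seed)
    coeffs = pywt.wavedec(ts, wavelet)
    # coeffs[0] is the coarsest approximation; coeffs[1:] are detail scales.
    shuffled = [coeffs[0]] + [rng.permutation(c) for c in coeffs[1:]]
    rec = pywt.waverec(shuffled, wavelet)
    return rec[: len(ts)]                    # waverec may pad by one sample

# Placeholder 1/f-like noise: cumulative sum of white noise (Brownian-like).
rng = np.random.default_rng(3)
ts = np.cumsum(rng.normal(size=256))
surrogate = wavelet_resample(ts)
print(ts.std(), surrogate.std())
```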


Journal ArticleDOI
TL;DR: In this paper, the authors compare the distributions of the test statistics under various permutation methods and show that the partial correlations under permutation are asymptotically jointly normal with means 0 and variances 1.
Abstract: Several approximate permutation tests have been proposed for tests of partial regression coefficients in a linear model based on sample partial correlations. This paper begins with an explanation and notation for an exact test. It then compares the distributions of the test statistics under the various permutation methods proposed, and shows that the partial correlations under permutation are asymptotically jointly normal with means 0 and variances 1. The method of Freedman & Lane (1983) is found to have asymptotic correlation 1 with the exact test, and the other methods are found to have smaller correlations with this test. Under local alternatives the critical values of all the approximate permutation tests converge to the same constant, so they all have the same asymptotic power. Simulations demonstrate these theoretical results.

532 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce a method for validation of results obtained by clustering analysis of data based on resampling the available data, and a figure of merit that measures the stability of clustering solutions against resampling is introduced.
Abstract: We introduce a method for validation of results obtained by clustering analysis of data. The method is based on resampling the available data. A figure of merit that measures the stability of clustering solutions against resampling is introduced. Clusters that are stable against resampling give rise to local maxima of this figure of merit. This is presented first for a one-dimensional data set, for which an analytic approximation for the figure of merit is derived and compared with numerical measurements. Next, the applicability of the method is demonstrated for higher-dimensional data, including gene microarray expression data.
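
The idea of scoring cluster stability under resampling can be sketched as: cluster the full data, recluster random subsamples, and measure how consistently pairs of points stay together. The sketch below uses k-means from scikit-learn as a stand-in clustering algorithm and a simplified co-membership agreement score, not the paper's figure of merit; the subsample fraction and the toy data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def stability_score(X, k, n_resamples=20, frac=0.8, seed=0):
    """Average agreement between the full-data co-clustering structure and the
    structure refit on random subsamples (a simplified stability measure).
    Comparing co-membership matrices avoids the label-switching problem."""
    rng = np.random.default_rng(seed)
    base = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores = []
    for _ in range(n_resamples):
        idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X[idx])
        same_base = base[idx][:, None] == base[idx][None, :]
        same_sub = labels[:, None] == labels[None, :]
        scores.append(np.mean(same_base == same_sub))
    return np.mean(scores)

# Placeholder data: two well-separated Gaussian blobs; k = 2 should be most stable.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
for k in (2, 3, 4):
    print(k, round(stability_score(X, k), 3))
```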

Journal ArticleDOI
TL;DR: In this article, a simple resampling method that repeatedly perturbs the objective function is proposed for estimating the covariance matrix of the estimator of a vector of parameters of interest; inferences can then be made based on a large collection of the resulting optimisers.
Abstract: Suppose that under a semiparametric setting an estimator of a vector of parameters of interest is obtained by optimising an objective function which has a U-process structure. The covariance matrix of the estimator is generally a function of the underlying density function, which may be difficult to estimate well by conventional methods. In this paper, we present a simple resampling method by perturbing the objective function repeatedly. Inferences of the parameters can then be made based on a large collection of the resulting optimisers. We illustrate our proposal by three examples with a heteroscedastic regression model.
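
The perturbation idea can be illustrated on an ordinary least-squares objective: reweight the individual loss terms with independent unit-mean positive random weights, re-optimise, and use the spread of the optimisers as a variance estimate. This is only an analogy to the paper's U-process setting; the exponential weights, the toy heteroscedastic regression, and the function names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def perturbed_estimates(x, y, n_resamples=500, seed=0):
    """Re-optimise a randomly reweighted least-squares objective, with i.i.d.
    unit-mean exponential weights on the individual loss terms. The covariance
    of the resulting optimisers estimates the sampling covariance of the
    original estimator without an analytic variance formula."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])

    def objective(beta, w):
        return np.sum(w * (y - X @ beta) ** 2)

    beta_hat = minimize(objective, np.zeros(2), args=(np.ones(len(y)),)).x
    draws = []
    for _ in range(n_resamples):
        w = rng.exponential(1.0, size=len(y))
        draws.append(minimize(objective, beta_hat, args=(w,)).x)
    return beta_hat, np.cov(np.array(draws), rowvar=False)

# Toy heteroscedastic regression data.
rng = np.random.default_rng(5)
x = rng.uniform(0, 4, 80)
y = 1.0 + 2.0 * x + (0.5 + 0.5 * x) * rng.normal(size=80)
beta_hat, cov = perturbed_estimates(x, y)
print(beta_hat, np.sqrt(np.diag(cov)))
```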

Book ChapterDOI
TL;DR: The bootstrap, as discussed in this chapter, is a method for estimating the distribution of an estimator or test statistic by resampling one’s data or a model estimated from the data, and it is a practical technique that is ready for use in applications.
Abstract: The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one’s data or a model estimated from the data. Under conditions that hold in a wide variety of econometric applications, the bootstrap provides approximations to distributions of statistics, coverage probabilities of confidence intervals, and rejection probabilities of hypothesis tests that are more accurate than the approximations of first-order asymptotic distribution theory. The reductions in the differences between true and nominal coverage or rejection probabilities can be very large. The bootstrap is a practical technique that is ready for use in applications. This chapter explains and illustrates the usefulness and limitations of the bootstrap in contexts of interest in econometrics. The chapter outlines the theory of the bootstrap, provides numerical illustrations of its performance, and gives simple instructions on how to implement the bootstrap in applications. The presentation is informal and expository. Its aim is to provide an intuitive understanding of how the bootstrap works and a feeling for its practical value in econometrics.

Journal ArticleDOI
TL;DR: Within-cluster resampling is proposed as a new method for analysing clustered data in this article, where the authors present theory for the asymptotic normality and provide a consistent variance estimator for the within-cluster resampling estimator.
Abstract: Within-cluster resampling is proposed as a new method for analysing clustered data. Although the focus of this paper is clustered binary data, the within-cluster resampling asymptotic theory is general for many types of clustered data. Within-cluster resampling is a simple but computationally intensive estimation method. Its main advantage over other marginal analysis methods, such as generalised estimating equations (Liang & Zeger, 1986; Zeger & Liang, 1986) is that it remains valid when the risk for the outcome of interest is related to the cluster size, which we term nonignorable cluster size. We present theory for the asymptotic normality and provide a consistent variance estimator for the within-cluster resampling estimator. Simulations and an example are developed that assess the finite-sample behaviour of the new method and show that when both methods are valid its performance is similar to that of generalised estimating equations.
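
Within-cluster resampling can be sketched as: repeatedly draw one observation per cluster, analyse each such set as independent data, and combine the estimates across resamples. The sketch below does this for a simple proportion from clustered binary data; the data are placeholders, and the variance combination used (average within-resample variance minus between-resample variance) is my reading of the general recipe rather than the paper's exact estimator.

```python
import numpy as np

def wcr_proportion(cluster_ids, y, n_resamples=1000, seed=0):
    """Within-cluster resampling estimate of P(Y = 1) from clustered binary
    data: draw one observation per cluster, estimate the proportion and its
    usual independent-data variance, repeat, then combine across resamples."""
    rng = np.random.default_rng(seed)
    clusters = [np.flatnonzero(cluster_ids == c) for c in np.unique(cluster_ids)]
    n = len(clusters)
    est, var = [], []
    for _ in range(n_resamples):
        pick = [rng.choice(idx) for idx in clusters]   # one observation per cluster
        p = y[pick].mean()
        est.append(p)
        var.append(p * (1 - p) / n)
    est, var = np.array(est), np.array(var)
    # Combine: average within-resample variance minus between-resample variance.
    p_hat = est.mean()
    v_hat = var.mean() - est.var(ddof=1)
    return p_hat, max(v_hat, 0.0)

# Placeholder clustered data in which larger clusters carry higher risk
# (an informative, i.e. nonignorable, cluster size).
rng = np.random.default_rng(6)
sizes = rng.integers(1, 8, size=60)
cluster_ids = np.repeat(np.arange(60), sizes)
y = rng.binomial(1, np.clip(0.1 + 0.08 * np.repeat(sizes, sizes), 0, 1))
print(wcr_proportion(cluster_ids, y))
```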

Book ChapterDOI
01 Jan 2001
TL;DR: One key element of MCF techniques is the recursive use of the importance sampling principle, which leads to the more precise name sequential importance sampling (SIS) for the techniques that are to be the focus of this article.
Abstract: Monte Carlo filters (MCF) can be loosely defined as a set of methods that use Monte Carlo simulation to solve on-line estimation and prediction problems in a dynamic system. Compared with traditional filtering methods, simple, flexible — yet powerful — MCF techniques provide effective means to overcome computational difficulties in dealing with nonlinear dynamic models. One key element of MCF techniques is the recursive use of the importance sampling principle, which leads to the more precise name sequential importance sampling (SIS) for the techniques that are to be the focus of this article.

Book ChapterDOI
16 Sep 2001
TL;DR: This paper presents a general importance sampling framework for the filtering/smoothing problem, shows how the standard techniques can be obtained from this general approach, and describes the role of MCMC resampling as proposed by Gilks and Berzuini and by MacEachern, Clyde and Liu (1999).
Abstract: The particle filtering field has seen an upsurge in interest over recent years, and accompanying this upsurge several enhancements to the basic techniques have been suggested in the literature. In this paper we collect a group of these developments that seem to be particularly important for time series applications and give a broad discussion of the methods, showing the relationships between them. We firstly present a general importance sampling framework for the filtering/smoothing problem and show how the standard techniques can be obtained from this general approach. In particular, we show that the auxiliary particle filtering methods of (Pitt and Shephard: this volume) fall into the same general class of algorithms as the standard bootstrap filter of (Gordon et al. 1993). We then develop the ideas further and describe the role of MCMC resampling as proposed by (Gilks and Berzuini: this volume) and (MacEachern, Clyde and Liu 1999). Finally, we present a generalisation of our own in which MCMC resampling ideas are used to traverse a sequence of ‘bridging’ densities which lie between the prediction density and the filtering density. In this way it is hoped to reduce the variability of the importance weights by attempting a series of smaller, more manageable moves at each time step.

Journal ArticleDOI
TL;DR: A cluster methodology, motivated via density estimation, is proposed, based on the idea of estimating the population clusters, which are defined as the connected parts of the “substantial” support of the underlying density.

Proceedings ArticleDOI
09 Dec 2001
TL;DR: This paper shows three methods for incorporating the error due to input distributions that are based on finite samples when calculating confidence intervals for output parameters.
Abstract: Stochastic simulation models are used to predict the behavior of real systems whose components have random variation. The simulation model generates artificial random quantities based on the nature of the random variation in the real system. Very often, the probability distributions occurring in the real system are unknown, and must be estimated using finite samples. This paper shows three methods for incorporating the error due to input distributions that are based on finite samples, when calculating confidence intervals for output parameters.

Book
01 Jan 2001
TL;DR: Permutation tests are a paradox of old and new, as discussed in this paper: a test statistic is computed on the observed data, and the data are then permuted over all possible arrangements (an exact permutation test), used to calculate the exact moments of the permutation distribution (a moment approximation test), or permuted over a subset of all possible arrangements (a resampling approximation test).
Abstract: Permutation tests are a paradox of old and new. Permutation tests pre-date most traditional parametric statistics, but only recently have become part of the mainstream discussion regarding statistical testing. Permutation tests follow a permutation or 'conditional on errors' model whereby a test statistic is computed on the observed data, then (1) the data are permuted over all possible arrangements of the data (an exact permutation test); (2) the data are used to calculate the exact moments of the permutation distribution (a moment approximation permutation test); or (3) the data are permuted over a subset of all possible arrangements of the data (a resampling approximation permutation test). The earliest permutation tests date from the 1920s, but it was not until the advent of modern day computing that permutation tests became a practical alternative to parametric statistical tests. In recent years, permutation analogs of existing statistical tests have been developed. These permutation tests provide noteworthy advantages over their parametric counterparts for small samples and populations, or when distributional assumptions cannot be met. Unique permutation tests have also been developed that allow for the use of Euclidean distance rather than the squared Euclidean distance that is typically employed in parametric tests. This overview provides a chronology of the development of permutation tests accompanied by a discussion of the advances in computing that made permutation tests feasible. Attention is paid to the important differences between 'population models' and 'permutation models', and between tests based on Euclidean and squared Euclidean distances. WIREs Comp Stat 2011 3 527-542 DOI: 10.1002/wics.177

Journal ArticleDOI
TL;DR: This book is found to be a comprehensive discussion of methods that can be used to perform both machine and process capability studies, and readers, particularly statisticians, should find it to be a worthwhile addition to their libraries.
Abstract: The authors present procedures for three types of measurement studies. Readers will recognize that the Type 2 and 3 studies are variations of a standard R&R study approach. The chapter ends with some frequently asked questions about measurement system capability. In the Appendix, the authors give a shameless plug for the Q-DAS software that was used to generate most (if not all) of the graphical analysis in the text. The software looks good, but there are many statistical programs that offer the same analyses. The Appendix also contains a discussion of analysis of variance as applied to the three types of measurement studies covered in Chapter 8. Formulas for the probability distribution functions (pdf’s) and cumulative distribution functions (cdf’s) of the distributions discussed in the text are given at the end of the Appendix. The only complaint that the reader may have with this book is its dependence on German standards. Most readers will be more familiar with the ISO, ANSI, or ASTM E-11 standards than the DIN standards for performing capability studies for machines and processes. The QS-9000 standard is briefly mentioned, but the authors could better serve their readers by offering a set of comparable (or equivalent) international or U.S. standards that may be more accessible to them. For example, ASTM and ASQ (American Society for Quality) offer standards that can be obtained via their Internet Web sites. In conclusion, I found this book to be a comprehensive discussion of methods that can be used to perform both machine and process capability studies. The approach of extending these ideas to the concept of machine and process qualification is intuitive, and readers, particularly statisticians, should find this book to be a worthwhile addition to their libraries.

Proceedings Article
04 Jan 2001
TL;DR: A novel process for the production of the valuable perfume material norpatchoulenol is disclosed which involves oxidatively decarboxylating an acid precursor according to the following reaction scheme.
Abstract: A novel process for the production of the valuable perfume material norpatchoulenol is disclosed which involves oxidatively decarboxylating an acid precursor (the reaction scheme is not reproduced here). Intermediates for the synthesis, having the formula VI wherein R represents -COOH, -CH2OH or -CHO, are also disclosed.

Book ChapterDOI
01 Jan 2001
TL;DR: In this chapter, a new bias correction method for the DIMTEST procedure based on the statistical principle of resampling is introduced; a simulation study shows the corrected procedure has a Type I error rate close to the nominal level and high power to detect multidimensionality.
Abstract: Following in the nonparametric item response theory tradition, DIMTEST (Stout, 1987) is an asymptotically justified nonparametric procedure that provides a test of hypothesis of unidimensionality of a test data set. This chapter introduces a new bias correction method for the DIMTEST procedure based on the statistical principle of resampling. A simulation study shows this new version of DIMTEST has a Type I error rate close to the nominal rate of α = 0.05 in most cases and very high power to detect multidimensionality in a variety of realistic multidimensional models. The result with this new bias correction method is an improved DIMTEST procedure with much wider applicability and good statistical performance.

Journal ArticleDOI
TL;DR: It is found that exchange rates do appear to contain information that is exploitable for enhanced point prediction, but the nature of the predictive relations evolves through time.
Abstract: We propose tests for individual and joint irrelevance of network inputs. Such tests can be used to determine whether an input or group of inputs "belong" in a particular model, thus permitting valid statistical inference based on estimated feedforward neural-network models. The approaches employ well-known statistical resampling techniques. We conduct a small Monte Carlo experiment showing that our tests have reasonable level and power behavior, and we apply our methods to examine whether there are predictable regularities in foreign exchange rates. We find that exchange rates do appear to contain information that is exploitable for enhanced point prediction, but the nature of the predictive relations evolves through time.

Book ChapterDOI
01 Jan 2001
TL;DR: In standard sequential imputation, repeated resampling stages progressively impoverish the set of particles, by decreasing the number of distinct values represented in that set, and a possible remedy is Rao-Blackwellisation (Liu and Chen 1998).
Abstract: In standard sequential imputation, repeated resampling stages progressively impoverish the set of particles by decreasing the number of distinct values represented in that set. A possible remedy is Rao-Blackwellisation (Liu and Chen 1998). Another remedy, which we discuss in this chapter, is to adopt a hybrid particle filter which combines importance sampling/resampling (Rubin 1988, Smith and Gelfand 1992) and Markov chain iterations. An example of this class of particle filters is the RESAMPLE-MOVE algorithm described in (Gilks and Berzuini 1999), in which the swarm of particles is adapted to an evolving target distribution by periodical resampling steps and through occasional Markov chain moves that lead each individual particle from its current position to a new point of the parameter space. These moves increase particle diversity. Markov chain moves had previously been introduced in particle filters (for example, (Berzuini, Best, Gilks and Larizza 1997, Liu and Chen 1998)), but rarely with the possibility of moving particles at any stage of the evolution process along any direction of the parameter space; this is, indeed, an important and innovative feature of RESAMPLE-MOVE. This makes it possible, in particular, to prevent particle depletion along directions of the parameter space corresponding to static parameters, for example when the model contains unknown hyper-parameters, a situation which is not addressed by the usual state filtering algorithms.

Journal ArticleDOI
TL;DR: This paper shows how to improve the quality of a given polygonal mesh model by resampling its feature and blend regions within an interactive framework and demonstrates sophisticated modeling operations that can be implemented based on this resampling technique.
Abstract: Efficient surface reconstruction and reverse engineering techniques are usually based on a polygonal mesh representation of the geometry: the resulting models emerge from piecewise linear interpolation of a set of sample points. The quality of the reconstruction not only depends on the number and density of the sample points but also on their alignment to sharp and rounded features of the original geometry. Bad alignment can lead to severe alias artifacts. In this paper we present a sampling pattern for feature and blend regions which minimizes these alias errors. We show how to improve the quality of a given polygonal mesh model by resampling its feature and blend regions within an interactive framework. We further demonstrate sophisticated modeling operations that can be implemented based on this resampling technique.


Journal ArticleDOI
TL;DR: In this paper, a sieve bootstrap procedure based on residual resampling from autoregressive approximations is used to test the unit root hypothesis in models that may include a linear trend and/or an intercept.
Abstract: This paper examines bootstrap tests of the null hypothesis of an autoregressive unit root in models that may include a linear trend and/or an intercept and which are driven by innovations that belong to the class of stationary and invertible linear processes. Our approach makes use of a sieve bootstrap procedure based on residual resampling from autoregressive approximations, the order of which increases with the sample size at a suitable rate. We show that the sieve bootstrap provides asymptotically valid tests of the unit-root hypothesis and demonstrate the small-sample effectiveness of the method by means of simulation.
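
The sieve bootstrap mechanics can be sketched as: difference the series, fit an autoregressive approximation, resample the centred residuals, rebuild unit-root series by cumulating the regenerated differences, and recompute the test statistic each time. The sketch below uses a fixed AR order and a Dickey-Fuller statistic without deterministic terms, which are simplifications of the paper's setup (where the order grows with the sample size); the data and function names are placeholders.

```python
import numpy as np

def df_tstat(y):
    """t-statistic for rho in the regression  dy_t = rho * y_{t-1} + error
    (Dickey-Fuller without deterministic terms, for simplicity)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

def sieve_bootstrap_unit_root(y, p=4, n_boot=999, seed=0):
    """Sieve bootstrap p-value for the unit-root null: fit an AR(p)
    approximation to the differenced series, resample its centred residuals,
    rebuild unit-root series, and recompute the test statistic each time."""
    rng = np.random.default_rng(seed)
    t_obs = df_tstat(y)
    dy = np.diff(y)
    # Fit AR(p) to the differences by least squares.
    X = np.column_stack([dy[p - j - 1: len(dy) - j - 1] for j in range(p)])
    target = dy[p:]
    phi, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ phi
    resid = resid - resid.mean()
    count = 0
    for _ in range(n_boot):
        eps = rng.choice(resid, size=len(dy) + p, replace=True)
        dy_b = np.zeros(len(dy) + p)
        for t in range(p, len(dy_b)):
            dy_b[t] = phi @ dy_b[t - p: t][::-1] + eps[t]   # AR(p) recursion
        y_b = np.concatenate([[y[0]], y[0] + np.cumsum(dy_b[p:])])
        if df_tstat(y_b) <= t_obs:          # left-tailed test
            count += 1
    return t_obs, (count + 1) / (n_boot + 1)

# Placeholder: a random walk with AR(1) innovations (the unit-root null holds).
rng = np.random.default_rng(7)
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.5 * e[t - 1] + rng.normal()
y = np.cumsum(e)
print(sieve_bootstrap_unit_root(y))
```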

Journal ArticleDOI
Hisashi Tanizaki1
TL;DR: In this paper, the joint density of the state variables is used for nonlinear non-Gaussian filtering and smoothing, where sampling techniques such as rejection sampling (RS), importance resampling (IR), and Metropolis-Hastings independence sampling (MH) are utilized.
Abstract: In this paper, nonlinear non-Gaussian filters and smoothers are proposed using the joint density of the state variables, where sampling techniques such as rejection sampling (RS), importance resampling (IR) and Metropolis-Hastings independence sampling (MH) are utilized. Utilizing the random draws generated from the joint density, density-based recursive algorithms for filtering and smoothing can be obtained. Furthermore, taking into account the possibility of structural changes and outliers during the estimation period, an appropriately chosen sampling density can be introduced into the suggested nonlinear non-Gaussian filtering and smoothing procedures. Finally, through Monte Carlo simulation studies, the suggested filters and smoothers are examined.

Journal ArticleDOI
TL;DR: In this article, a resampling version of the Gupta procedure is used to obtain a set of good models, which are not significantly worse than the maximum likelihood model; i.e., a confidence set of models.
Abstract: We consider multiple comparisons of log-likelihoods to take account of the multiplicity of testings in selection of nonnested models. A resampling version of the Gupta procedure for the selection problem is used to obtain a set of good models, which are not significantly worse than the maximum likelihood model; i.e., a confidence set of models. Our method is to test which model is better than the other, while the object of the classical testing methods is to find the correct model. Thus the null hypotheses behind these two approaches are very different. Our method and the other commonly used approaches, such as the approximate Bayesian posterior, the bootstrap selection probability, and the LR test against the full model, are applied to the selection of molecular phylogenetic tree of mammal species. Tree selection is a version of the model-based clustering, which is an example of nonnested model selection. It is shown that the structure of the tree selection problem is equivalent to that of the variable ...

Journal ArticleDOI
TL;DR: A statistic based on a ratio of quadratic forms is proposed and the test is constructed by investigating the distributional properties of this statistic under the assumption of an independent Gaussian process.
Abstract: The variogram is a standard tool in the analysis of spatial data, and its shape provides useful information on the form of spatial correlation that may be present. However, it is also useful to be able to assess the evidence for the presence of any spatial correlation. A method of doing this, based on an assessment of whether the true function underlying the variogram is constant, is proposed. Nonparametric smoothing of the squared differences of the observed variables, on a suitably transformed scale, is used to estimate variogram shape. A statistic based on a ratio of quadratic forms is proposed and the test is constructed by investigating the distributional properties of this statistic under the assumption of an independent Gaussian process. The power of the test is investigated. Reference bands are proposed as a graphical follow-up. An example is discussed.