
Showing papers on "Resampling published in 1989"


Journal ArticleDOI
TL;DR: A series of statistical tests for hypotheses of morphological integration and for interspecific comparison are presented, along with examples of their application.
Abstract: Although comparisons of variation patterns with theoretical expectations and across species are playing an increasingly important role in systematics, there has been a lack of appropriate procedures for statistically testing the proposed hypotheses. We present a series of statistical tests for hypotheses of morphological integration and for interspecific comparison, along with examples of their application. These tests are based on various randomization and resampling procedures, such as Mantel's test with its recent extensions and bootstrapping. They have the advantage of avoiding the specific and strict distributional assumptions invoked by analytically based statistics. The statistical procedures described include one for testing the fit of observed correlation matrices to hypotheses of morphological integration and a related test for significant differences in the fit of two alternative hypotheses of morphological integration to the observed correlation structure. Tests for significant similarity in the patterns and magnitudes of variance and correlation among species are also provided. [Morphometrics; comparative analysis; morphological integration; quadratic assignment procedures; Mantel's test; bootstrap.]

Comparing observed patterns of morphometric variation to theories of morphological integration (Olson and Miller, 1958; Cheverud, 1982) and among species or subspecific populations (Arnold, 1981; Riska, 1985) has been a largely ad hoc procedure. Previously, a large body of methods has been used to analyze variation patterns, including various forms of cluster analysis, factor analysis, principal components, multidimensional scaling, matrix correlations, and visual inspection. The results of such analyses were then discussed relative to some theory of variation patterns or compared between species or populations. These comparisons might be either verbal or quantitative, but tests of statistical significance were rarely employed. More recently, there has been an increase in statistical rigor in the field, particularly involving the use of quadratic assignment procedures (QAP; sometimes referred to as Mantel's test) (Mantel, 1967; Dietz, 1983; Dow and Cheverud, 1985; Smouse et al., 1986; Dow et al., 1987a, b; Hubert, 1987) for testing the statistical significance of matrix comparisons (Cheverud and Leamy, 1985; Lofsvold, 1986; Kohn and Atchley, 1988; Cheverud, 1989a; Wagner, 1989) and the use of confirmatory factor analysis (Zelditch, 1987, 1988) for testing hypotheses concerning levels and patterns of variation. These new methods allow statistical inference for hypotheses of morphological integration and for comparisons across species. We describe the use of several of these newer methods, especially those using randomization, for testing hypotheses of morphological integration and interspecific comparison, and provide brief examples of their use. The procedures described below can be used to rigorously test hypotheses concerning the causes of morphological variation and covariation patterns. A closely related set of procedures can be directed towards comparative, cross-taxon analyses of variation and correlation patterns. The systematic study of distinctions among group means is well known and extensively represented in the literature. However, systematic studies of variation patterns (as measured by a multivariate variance/covariance or correlation matrix) have been relatively rare. This has been due, in part, to a lack of relevant theory and appropriate systematic methodology. Important theoretical advances over the
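A minimal sketch of the matrix-permutation logic behind Mantel's test (QAP), as applied here to comparing correlation matrices, may help: the association between two symmetric matrices is computed over their off-diagonal elements, and a null distribution is generated by permuting the rows and columns of one matrix in tandem. This is an illustrative Python/NumPy reconstruction with my own naming, not the authors' code.

```python
import numpy as np

def mantel_test(A, B, n_perm=9999, seed=None):
    """Mantel's matrix-permutation (QAP) test: correlate the
    off-diagonal elements of two symmetric matrices, then permute the
    rows and columns of one matrix in tandem to build the null."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)              # upper off-diagonal elements
    r_obs = np.corrcoef(A[iu], B[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        r = np.corrcoef(A[p][:, p][iu], B[iu])[0, 1]
        if abs(r) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)  # permutation p-value
```

A significant observed correlation supports the hypothesis encoded in the theoretical matrix B; the same machinery extends to comparing observed matrices between species.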

281 citations


Journal ArticleDOI
TL;DR: In this paper, a few alternate forms of the so-called bootstrap and jackknife resampling procedures are tested using a concocted data set with a Gaussian parent distribution, with the result that the jackknife is the most efficient procedure to apply, although its confidence bounds are slightly overestimated.

270 citations


Journal ArticleDOI
TL;DR: It is recommended that biologists use some resampling procedure (cross-validation, jackknife resampling, or bootstrap resampling) to evaluate wildlife habitat models prior to field evaluation; computer simulations illustrate the increase in precision these methods provide.
Abstract: Predictive models of wildlife-habitat relationships often have been developed without being tested. The apparent classification accuracy of such models can be optimistically biased and misleading. Data resampling methods exist that yield a more realistic estimate of model classification accuracy. These methods are simple and require no new sample data. We illustrate these methods (cross-validation, jackknife resampling, and bootstrap resampling) with computer simulation to demonstrate the increase in precision of the estimate. The bootstrap method is then applied to field data as a technique for model comparison. We recommend that biologists use some resampling procedure to evaluate wildlife habitat models prior to field evaluation.
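One way to realize the bootstrap evaluation recommended above is to refit the model on each resample and score it on the cases the resample left out (the "out-of-bag" observations). The sketch below assumes generic fit/predict callables and is one plausible scheme, not necessarily the paper's exact procedure.

```python
import numpy as np

def bootstrap_accuracy(X, y, fit, predict, n_boot=200, seed=None):
    """Out-of-bag bootstrap estimate of classification accuracy:
    refit on each resample, score on the unsampled cases."""
    rng = np.random.default_rng(seed)
    n = len(y)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample rows with replacement
        oob = np.setdiff1d(np.arange(n), idx)   # cases left out of the resample
        if oob.size == 0:
            continue
        model = fit(X[idx], y[idx])
        scores.append(np.mean(predict(model, X[oob]) == y[oob]))
    return float(np.mean(scores)), float(np.std(scores))
```

The apparent accuracy (fitting and scoring on the same data) will typically exceed this estimate; the gap is the optimistic bias the authors warn about.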

195 citations


Journal ArticleDOI
TL;DR: In this paper, an alternative resampling plan, based on the bootstrap, is proposed in an attempt to estimate mean integrated squared error, which leads to a further data-based choice of smoothing parameter.
Abstract: SUMMARY Cross-validation based on integrated squared error has already been applied to the choice of smoothing parameter in the kernel method of density estimation. In this paper, an alternative resampling plan, based on the bootstrap, is proposed in an attempt to estimate mean integrated squared error. This leads to a further data-based choice of smoothing parameter. The two methods are compared and some simulations and examples demonstrate the relative merits. For large samples, the bootstrap performs better than cross-validation for many distributions.
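The bootstrap plan can be sketched as follows: for each candidate bandwidth h, draw samples from the kernel estimate itself (equivalently, resample the data and add kernel noise) and average the integrated squared error against the original estimate. This is an illustrative reconstruction, with all function and variable names mine, not the paper's algorithm.

```python
import numpy as np

def kde(grid, data, h):
    """Gaussian kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def bootstrap_mise(data, h, grid, n_boot=100, seed=None):
    """Smoothed-bootstrap estimate of MISE(h): resampling the data and
    adding N(0, h^2) noise is the same as sampling from the KDE."""
    rng = np.random.default_rng(seed)
    f_hat = kde(grid, data, h)
    dx = grid[1] - grid[0]
    ise = []
    for _ in range(n_boot):
        star = rng.choice(data, size=len(data)) + h * rng.standard_normal(len(data))
        ise.append(np.sum((kde(grid, star, h) - f_hat) ** 2) * dx)
    return float(np.mean(ise))

# The data-based bandwidth is the h minimizing bootstrap_mise over a grid of candidates.
```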

141 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe the use of bootstrap methods for the problem of testing homogeneity of variances when means are not assumed equal or known, and show that the new resampling procedures compare favorably with older methods in terms of test validity and power.
Abstract: This article describes the use of bootstrap methods for the problem of testing homogeneity of variances when means are not assumed equal or known. The methods are new in this context and allow the use of normal-theory test statistics such as F = s 2 1/s 2 2 without the normality assumption that is crucial for validity of critical values obtained from the F distribution. Both asymptotic analysis and Monte Carlo sampling show that the new resampling procedures compare favorably with older methods in terms of test validity and power.
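In outline, the null hypothesis of equal variances can be imposed by pooling the mean-centered observations and redrawing both samples from that pool, then referring the normal-theory statistic F = s1^2/s2^2 to its bootstrap distribution. The following is a simplified sketch of this idea, not necessarily the authors' exact resampling plan.

```python
import numpy as np

def bootstrap_variance_test(x, y, n_boot=4999, seed=None):
    """Bootstrap test of equal variances with unknown, unequal means:
    centering removes the means, pooling imposes the null."""
    rng = np.random.default_rng(seed)
    f_obs = np.var(x, ddof=1) / np.var(y, ddof=1)
    pool = np.concatenate([x - x.mean(), y - y.mean()])
    count = 0
    for _ in range(n_boot):
        f = (np.var(rng.choice(pool, size=len(x)), ddof=1)
             / np.var(rng.choice(pool, size=len(y)), ddof=1))
        if max(f, 1 / f) >= max(f_obs, 1 / f_obs):   # two-sided in the ratio
            count += 1
    return f_obs, (count + 1) / (n_boot + 1)
```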

110 citations


Journal ArticleDOI
TL;DR: Methods for estimating the regional variance in emission tomography images that arises from the Poisson nature of the raw data are discussed, based on the bootstrap and jackknife methods of statistical resampling theory.
Abstract: Methods for estimating the regional variance in emission tomography images that arises from the Poisson nature of the raw data are discussed. The methods are based on the bootstrap and jackknife methods of statistical resampling theory. The bootstrap is implemented in time-of-flight PET (positron emission tomography); the same techniques can be applied to non-time-of-flight PET and SPECT (single-photon-emission computed tomography). The estimates are validated by comparing them to those obtained by repetition of emission scans, using data from a time-of-flight positron emission tomograph. Simple expressions for the accuracy of the estimates are given. The present approach is computationally feasible and can be applied to any reconstruction technique as long as the data are acquired in a raw, uncorrected form.
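The resampling step can be sketched generically: redistribute the recorded events multinomially over the projection bins (mimicking the Poisson noise in the raw counts), push each replicate through the reconstruction, and take the variance across replicates. Here `reconstruct` is a hypothetical placeholder for whatever reconstruction algorithm is in use; this is not the authors' implementation.

```python
import numpy as np

def bootstrap_region_variance(counts, reconstruct, region, n_boot=50, seed=None):
    """Bootstrap the variance of a regional mean from raw, uncorrected
    projection counts; `reconstruct` maps counts to an image array and
    `region` is a boolean mask or index array of pixels."""
    rng = np.random.default_rng(seed)
    total = int(counts.sum())
    p = counts.ravel() / total                  # empirical event distribution
    means = []
    for _ in range(n_boot):
        star = rng.multinomial(total, p).reshape(counts.shape)
        means.append(reconstruct(star)[region].mean())
    return float(np.var(means, ddof=1))
```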

81 citations


Journal ArticleDOI
TL;DR: This volume collects Basu's essays on statistical information and likelihood, sufficiency and ancillarity, the elimination of nuisance parameters, the logical foundations of survey sampling, and the role of randomization in data analysis, including the Fisher randomization test.
Abstract: Contents:
Part I: Information and Likelihood
I. Recovery of Ancillary Information: 0. Notes. 1. Introduction. 2. The Sample Size Analogy. 3. A Logical Difficulty. 4. Conceptual Statistical Experiments.
II. Statistical Information and Likelihood, Part I: Principles: 0. Notes. 1. Statistical Information. 2. Basic Definitions and Relations. 3. Some Principles of Inference. 4. Information as a Function. 5. Fisher Information. 6. The Likelihood Principle.
III. Statistical Information and Likelihood, Part II: Methods: 0. Notes. 1. Non-Bayesian Likelihood Methods. 2. Likelihood: Point Function or a Measure? 3. Maximum Likelihood.
IV. Statistical Information and Likelihood, Part III: Paradoxes: 0. Notes. 1. A Fallacy of Five Terms. 2. The Stopping Rule Paradox. 3. The Stein Paradox.
V. Statistical Information and Likelihood: Discussions: 0. Notes. 1. Discussions. 2. Barnard-Basu Correspondence.
VI. Partial Sufficiency: 0. Notes. 1. Introduction. 2. Specific Sufficient Statistics. 3. Partial Sufficiency. 4. H-sufficiency. 5. Invariantly Sufficient Statistics. 6. Final Remarks.
VII. Elimination of Nuisance Parameters: 0. Notes. 1. The Elimination Problem and Methods. 2. Marginalization and Conditioning. 3. Partial Sufficiency and Partial Ancillarity. 4. Generalized Sufficiency and Conditionality Principles. 5. A Choice Dilemma. 6. A Conflict. 7. Rao-Blackwell Type Theorems. 8. The Bayesian Way. 9. Unrelated Parameters.
VIII. Sufficiency and Invariance: 0. Notes. 1. Summary. 2. Definitions and Preliminaries. 3. A Mathematical Introduction. 4. Statistical Motivation. 5. When a Boundedly Complete Sufficient Sub-field Exists. 6. The Dominated Case. 7. Examples. 8. Transformations of a Set of Normal Variables. 9. Parameter-preserving Transformations. 10. Some Typical Invariance Reductions. 11. Some Final Remarks.
IX. Ancillary Statistics, Pivotal Quantities and Confidence Statements: 1. Introduction. 2. Ancillary Statistics. 3. Ancillary Information. 4. Pivotal Quantities. 5. Confidence Statements. 6. Ancillarity in Survey Sampling.
Part II: Survey Sampling and Randomization
X. Sufficiency in Survey Sampling: 1. Introduction and Summary. 2. Sufficient Statistics and Sub-Fields. 3. Pitcher and Burkholder Pathologies. 4. Sufficiency in Typical Sampling Situations.
XI. Likelihood Principle and Survey Sampling: 0. Notes. 1. Introduction. 2. Statistical Models and Sufficiency. 3. Sufficiency in Discrete Models. 4. The Sample Survey Models. 5. The Sufficiency and Likelihood Principles. 6. Role and Choice of the Sampling Plan. 7. Concluding Remarks.
XII. On the Logical Foundations of Survey Sampling: 1. An Idealization of the Survey Set-up. 2. Probability in Survey Theory. 3. Non-sequential Sampling Plans and Unbiased Estimation. 4. The Label-set and the Sample Core. 5. Linear Estimation in Survey Sampling. 6. Homogeneity, Necessary Bestness and Hyper-Admissibility. 7. Linear Invariance.
XIII. On the Logical Foundations of Survey Sampling: Discussions: 1. Discussions. 2. Author's Reply.
XIV. Relevance of Randomization in Data Analysis: 0. Notes. 1. Introduction. 2. Likelihood. 3. A Survey Sampling Model. 4. Why Randomize? 5. Randomization Analysis of Data. 6. Randomization and Information. 7. Information in Data. 8. A Critical Review.
XV. The Fisher Randomization Test: 0. Notes. 1. Introduction. 2. Randomization. 3. Two Fisher Principles. 4. The Fisher Randomization Test. 5. Did Fisher Change His Mind? 6. Randomization and Paired Comparisons. 7. Concluding Remarks.
XVI. The Fisher Randomization Test: Discussions: 1. Discussions. 2. Rejoinder.
Part III: Miscellaneous Notes and Discussions
XVII. Likelihood and Partial Likelihood. XVIII. A Discussion on the Fisher Exact Test. XIX. A Discussion on Survey Theory. XX. A Note on Unbiased Estimation. XXI. The Concept of Asymptotic Efficiency. XXII. Statistics Independent of a Complete Sufficient Statistic. XXIII. Statistics Independent of a Sufficient Statistic. XXIV. The Basu Theorems.
References.

65 citations


Journal ArticleDOI
TL;DR: In this article, an antithetic variates method for the bootstrap is proposed and discussed, which is applicable quite generally, to bias estimation, distribution function estimation and quantile estimation, for example.
Abstract: SUMMARY An antithetic variates method for the bootstrap is proposed and discussed. It is applicable quite generally, to bias estimation, distribution function estimation and quantile estimation, for example. It is based on an 'antithetic permutation' of the sample, which amounts to ranking values of a certain function of the data. Once this has been done, B uniform resampling operations may be immediately converted into 2B 'effective' resampling operations, yielding greater statistical efficiency than 2B totally independent resampling operations. We show that antithetic resampling leads to positive nonnegligible gains in performance, for the same level of labour, when compared with ordinary uniform resampling.
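For a statistic that is monotone in the data values (the sample mean, say), the antithetic permutation described above amounts to reversing ranks: rank i is exchanged with rank n - i + 1. Each uniform resample can therefore be paired with its rank-reversed mate at essentially no extra cost. A sketch under these assumptions, with names mine:

```python
import numpy as np

def antithetic_bootstrap(data, stat, n_boot=500, seed=None):
    """Antithetic resampling: draw ranks uniformly, evaluate the
    statistic on the resample and on its rank-reversed mate, and
    average the pair; the negative correlation between the two cuts
    Monte Carlo variance relative to 2*n_boot independent resamples."""
    rng = np.random.default_rng(seed)
    x = np.sort(data)                        # work with order statistics
    n = len(x)
    pairs = np.empty(n_boot)
    for b in range(n_boot):
        ranks = rng.integers(0, n, size=n)   # one uniform resample of ranks
        pairs[b] = 0.5 * (stat(x[ranks]) + stat(x[n - 1 - ranks]))
    return pairs                             # averaged antithetic pairs

# e.g. bias or variance estimation for the mean: antithetic_bootstrap(sample, np.mean)
```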

35 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used residuals in the two-sample problem to test the homogeneity of scale in the presence of nuisance location parameters and obtained new asymptotic results for U-statistics with estimated parameters.

34 citations


Journal ArticleDOI
TL;DR: Simulations compared the power of the approximate permutation test, the t test, and Wilcoxon's test for the two-sample location problem under a shift model; the approximate permutation test and the t test had nearly equal power, while Wilcoxon's test showed better power under non-normality.
Abstract: Simulations were performed to compare the power of the approximate permutation test with the power of the t test and Wilcoxon's test for the two-sample location problem under a shift model. The approximate permutation test is sometimes suggested as a panacea for non-normality. However, for the distributions and sample sizes used in this study, the powers of the approximate permutation test and the t test are nearly equal. Under non-normality, Wilcoxon's test does have better power characteristics than the other tests. It can therefore be concluded that, in this study, Wilcoxon's test, a permutation test on ranks, performs better under non-normality than the approximate permutation test, which uses the measurements themselves.
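For reference, the approximate permutation test examined here draws a Monte Carlo sample of regroupings rather than enumerating all partitions. A minimal sketch for the two-sample location problem:

```python
import numpy as np

def approximate_permutation_test(x, y, n_perm=9999, seed=None):
    """Monte Carlo (approximate) permutation test for a location
    difference: reshuffle the pooled measurements between groups."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    d_obs = x.mean() - y.mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        d = perm[:len(x)].mean() - perm[len(x):].mean()
        if abs(d) >= abs(d_obs):
            count += 1
    return d_obs, (count + 1) / (n_perm + 1)
```

Running the same scheme on the ranks of the pooled data rather than the raw measurements gives, approximately, Wilcoxon's test, which is the comparison at issue in the study.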

33 citations


Journal ArticleDOI
TL;DR: A statistical test to compare the recurrence or tumor rates in two treatment groups, using the randomization distribution, is described, and confidence intervals for the rate ratio are determined from the bootstrap distribution.

Journal ArticleDOI
TL;DR: In this paper, the asymptotic performance of the bootstrap in linear regression models is studied, and it is shown that the performance is at least as good as, and in some cases better than, the classical normal approximation.
Abstract: The asymptotic performance of the bootstrap in linear regression models is studied. Edgeworth expansions show that asymptotically, the bootstrap is always at least as good as, and in some cases better than, the classical normal approximation. The performances of both the bootstrap and the normal approximation depend on the rate of increase in the elements of the design matrix.
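One standard way to bootstrap a fixed-design linear model (the paper analyzes the asymptotics rather than prescribing code) is the residual bootstrap: resample the centred least-squares residuals and refit. A sketch, assuming homoscedastic errors:

```python
import numpy as np

def residual_bootstrap(X, y, n_boot=1000, seed=None):
    """Residual bootstrap for y = X beta + error with fixed design:
    rebuild responses from fitted values plus resampled residuals."""
    rng = np.random.default_rng(seed)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    resid = resid - resid.mean()             # centre the residuals
    betas = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        y_star = X @ beta_hat + rng.choice(resid, size=len(y))
        betas[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return beta_hat, betas                   # estimate and its bootstrap distribution
```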

Journal ArticleDOI
TL;DR: In this article, a computer-intensive bootstrap (resampling) approach was developed to estimate sampling effects on solutions from nonlinear ordination, and the resulting patterns of local and global instability in detrended correspondence analysis (DCA) solutions were examined.
Abstract: Indirect gradient analysis, or ordination, is primarily a method of exploratory data analysis. However, to support biological interpretations of resulting axes as vegetation gradients, or later confirmatory analyses and statistical tests, these axes need to be stable or at least robust to minor sampling effects. We develop a computer-intensive bootstrap (resampling) approach to estimate sampling effects on solutions from nonlinear ordination. We apply this approach to simulated data and to three forest data sets from North Carolina, USA, and examine the resulting patterns of local and global instability in detrended correspondence analysis (DCA) solutions. We propose a bootstrap coefficient, scaled rank variance (SRV), to estimate remaining instability in species ranks after rotating axes to a common global orientation. In analysis of simulated data, bootstrap SRV was generally consistent with an equivalent estimate from repeated sampling. In an example using field data, bootstrapped DCA showed good recovery of the order of common species along the first two axes, but poor recovery of later axes. We also suggest some criteria for using the SRV to decide how many axes to retain and attempt to interpret.

Journal ArticleDOI
TL;DR: In this article, the authors propose a double bootstrap method which is based on certain working models and involves two levels of resampling, which can be used to estimate the variance of any statistic.
Abstract: Variance estimation under systematic sampling with probability proportional to size is known to be a difficult problem. We attempt to tackle this problem by the bootstrap resampling method. It is shown that the usual way to bootstrap fails to give satisfactory variance estimates. As a remedy, we propose a double bootstrap method which is based on certain working models and involves two levels of resampling. Unlike existing methods which deal exclusively with the Horvitz–Thompson estimator, the double bootstrap method can be used to estimate the variance of any statistic. We illustrate this within the context of both mean and median estimation. Empirical results based on five natural populations are encouraging.

Journal ArticleDOI
TL;DR: In this paper, the authors compared the performance of bootstrap and approximate randomization tests with the parametric Pearson's r under composite-normal conditions in which the test of significance of Pearson's R is known to possess overly liberal Type I error rates.
Abstract: Computer-intensive statistical techniques have been suggested as alternatives to standard parametric analysis due to their freedom from normal-theory assumptions. Two such techniques that may be used for correlational analysis are bootstrap and approximate randomization tests. These techniques were compared with the parametric Pearson's r under composite-normal conditions in which the test of significance of Pearson's r is known to possess overly liberal Type I error rates. Results indicated that the approximate randomization test had Type I error rates that closely followed the parametric approach. The bootstrap, however, showed good control of the Type I error rates, except on small sample sizes.
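An approximate randomization test for Pearson's r is straightforward to sketch: permuting one variable destroys any association while preserving both marginal distributions, so no bivariate normality is assumed. Illustrative code, not the study's implementation:

```python
import numpy as np

def randomization_test_r(x, y, n_perm=9999, seed=None):
    """Approximate randomization test for Pearson's r: permute one
    variable to break the association, then compare correlations."""
    rng = np.random.default_rng(seed)
    r_obs = np.corrcoef(x, y)[0, 1]
    count = 0
    for _ in range(n_perm):
        if abs(np.corrcoef(x, rng.permutation(y))[0, 1]) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)
```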


Journal ArticleDOI
TL;DR: This work illustrates an alternative method of data handling, involving calculation of similarity between individuals, which is more biologically reasonable and which eliminates problems with published values of niche overlap (similarity).
Abstract: Published values of niche overlap (similarity) are generally point estimates of the similarity between population centroids. A number of shortcomings are associated with this method of data presentation: (i) confidence intervals on the estimates are lacking, (ii) no statistical procedures are used to test for significant differences between estimates, and (iii) the estimates tend to be biased. These problems arise primarily as a result of the manner in which data are pooled for the calculations. We illustrate an alternative method of data handling, involving calculation of similarity between individuals, which is more biologically reasonable and which eliminates these problems. A permutation test procedure is also introduced for use on large sets of data.

Journal ArticleDOI
TL;DR: The authors explored a resampling procedure for obtaining estimates of variance components with small samples, nonnormal distributions, and unbalanced designs, and the results suggest that the proposed method is useful when most needed.
Abstract: The accurate estimation of variance components is essential for studying the reliability of a measurement procedure in generalizability theory. Previous research has shown that errors in estimation of variance components lead to erroneous interpretations. This is particularly true with small samples, nonnormal data, and unbalanced designs. This study explores a resampling procedure for obtaining estimates of variance components. The results suggest that the proposed method is useful when most needed—with small samples, nonnormal distributions, and unbalanced designs.
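One plausible resampling scheme for a balanced one-way random-effects (generalizability) design is sketched below: method-of-moments variance components are recomputed on replicates formed by resampling levels, then observations within levels. The paper's exact procedure may differ; this is an illustration only.

```python
import numpy as np

def anova_components(groups):
    """Method-of-moments variance components for a balanced one-way
    random-effects design; groups has shape (levels, replicates)."""
    p, n = groups.shape
    ms_between = n * np.var(groups.mean(axis=1), ddof=1)
    ms_within = np.mean(np.var(groups, axis=1, ddof=1))
    return max((ms_between - ms_within) / n, 0.0), ms_within

def bootstrap_components(groups, n_boot=1000, seed=None):
    """Resample whole levels, then observations within each level,
    and recompute the components on every replicate."""
    rng = np.random.default_rng(seed)
    p, n = groups.shape
    out = np.empty((n_boot, 2))
    for b in range(n_boot):
        rows = groups[rng.integers(0, p, size=p)]                    # resample levels
        cols = rng.integers(0, n, size=(p, n))                       # then within levels
        out[b] = anova_components(rows[np.arange(p)[:, None], cols])
    return out   # bootstrap draws of (variance between, variance within)
```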

Journal ArticleDOI
TL;DR: In this article, a unidimensional model is derived which can provide the probability that the same two stimuli will be perceived to be most alike and most different in Richardson's method of triads under the assumption that resampling occurs within a trial.
Abstract: A unidimensional model is derived which can provide the probability that the same two stimuli will be perceived to be most alike and most different in Richardson's method of triads under the assumption that resampling occurs within a trial. This probability is shown to depend on the extent to which the stimulus distributions overlap and their relative locations on a unidimensional continuum. Recommendations on how to estimate this probability experimentally are given.

Journal ArticleDOI
TL;DR: In this paper, a modification of the Iterative Least-Squares method (ILS) developed by Schmee and Hahn (1979) and the Maximum-Likelihood Estimator (MLE) are compared for the network m_b estimation problem.
Abstract: We briefly discuss the similarities and differences between two iterative estimators that are suitable for the network m_b estimation problem, namely a modification of the Iterative Least-Squares method (ILS) developed by Schmee and Hahn (1979) and the Maximum-Likelihood Estimator (MLE). Both methods reduce to the usual Least-Squares Multiple Factors (LSMF) method when the censored data are deleted from the network observational data. For the censored case, the standard deviation (σ) of the obscuring noise has to be solved through iteration along with the event magnitudes and the station corrections. An extra constraint on σ is necessary to determine which optimal estimation scheme is of interest. The final value of σ for each iterative scheme can be used as a good approximation to the unbiased estimate of the standard deviation of the perturbing noise. By scaling this σ value by the square root of the number of observations associated with each unknown parameter, the uncertainty in each estimated parameter can be approximated efficiently. These error estimates seem to differ from the unbiased standard errors only by a common multiplying constant across all stations and all event m_b values. The bootstrap method is reviewed and adapted to the case of multivariate estimation with doubly censored data. The Monte Carlo resampling is carried out among the collection of residuals instead of the observational data. The pool of residuals is enlarged to include all censored residuals for random drawing. The bootstrap result confirms the aforementioned scaling relationship between the individual error estimates and the global σ of the perturbing noise. As a result, the bootstrap/jackknife techniques might not be worth the considerable computational effort they require in this specific application.

Journal ArticleDOI
TL;DR: The randomization test is designed to test the assumption, implicit in most cross-correlation algorithms, that both sequences comprise observations, at unknown but ordered times, from the same underlying yet unknown function of time.
Abstract: Several algorithms, based on dynamic programming techniques, for comparing, matching, or cross-correlating two ordered sequences of observations now exist. All such algorithms produce one or more “optimal” matchings, no matter what data are used. This paper presents an empirical method, based on a randomization test, for assessing how well the given sequences are matched or slotted together. The randomization test is designed to test the assumption, implicit in most cross-correlation algorithms, that both sequences comprise observations, at unknown but ordered times, from the same underlying yet unknown function of time. The test is intuitively appealing, easy to implement, works well on both artificial and real data, and requires no complicated parametric modelling.

Proceedings ArticleDOI
23 May 1989
TL;DR: Two implementation techniques for building a high-performance image-resampler VLSI chip are considered, including a modified two-pass resampling scheme that can provide a throughput of one pixel in a clock period smaller than that for an adder.
Abstract: The authors consider two implementation techniques for building a high-performance image-resampler VLSI chip. First, a two-level pipelined systolic array is designed for image resampling to give high parallelism in computation and high feasibility for VLSI implementation. Second, a modified two-pass resampling scheme is used to decrease the amount of required storage and increase the concurrency between two resampling passes. With the two techniques, the system can provide a throughput of one pixel in a clock period smaller than that for an adder.
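The two-pass scheme rests on the fact that many 2-D resamplings factor into two 1-D passes, one over rows and one over columns, each of which streams naturally through a pipelined datapath. A software sketch with linear interpolation (the chip's filtering, storage, and pipelining optimizations are not modelled):

```python
import numpy as np

def resample_line(line, new_len):
    """Linearly interpolate one scanline to a new length."""
    return np.interp(np.linspace(0, len(line) - 1, new_len),
                     np.arange(len(line)), line)

def two_pass_resample(img, out_h, out_w):
    """Two-pass (separable) image resampling: all rows first,
    then all columns of the intermediate image."""
    tmp = np.stack([resample_line(row, out_w) for row in img])       # pass 1: rows
    return np.stack([resample_line(col, out_h) for col in tmp.T]).T  # pass 2: columns
```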

Journal ArticleDOI
TL;DR: This work computed bootstrapped confidence intervals by three different methods and compared these intervals to one based on the asymptotic standard error and to a likelihood-based interval; the bootstrapped intervals did not perform well and underestimated the true coverage in most cases.
Abstract: A linear relative risk form for the Cox model is sometimes more appropriate than the usual exponential form. The usual asymptotic confidence interval may not have the appropriate coverage, however, due to flatness of the likelihood in the neighbourhood of β. For a single continuous covariate, we derive bootstrapped confidence intervals using two resampling methods. The first resamples the original data and yields both one-step and fully iterated estimates of β. The second resamples the score and information quantities at each failure time to yield a one-step estimate. We computed the bootstrapped confidence intervals by three different methods and compared these intervals to one based on the asymptotic standard error and to a likelihood-based interval. The bootstrapped intervals did not perform well and underestimated the true coverage in most cases.



Book ChapterDOI
01 Jan 1989
TL;DR: In this paper, a review of the use of Edgeworth expansions in bootstrap and other related resampling procedures is presented, showing that the bootstrap approximation is better in some cases than the classical normal approximation.
Abstract: The bootstrap and other resampling plans have gained popularity among applied scientists in the past few years. In general, the standard bootstrap method is known to give as good an approximation as normal theory for the sampling distribution of a statistic. The main emphasis of the review is on results showing that the bootstrap approximation is better in some cases than the classical normal approximation. This was first observed for the sample mean in 1981 by Kesar Singh. The result has been extended by him and the present author to a wide class of statistics. These results mainly use Edgeworth expansions. This article reviews recent results on the use of Edgeworth expansions in the bootstrap and other related resampling procedures.
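The flavor of these results is visible in the one-term Edgeworth expansion for the standardized sample mean, the case treated by Singh (1981):

$$
P\!\left(\frac{\sqrt{n}\,(\bar X_n-\mu)}{\sigma}\le x\right)
= \Phi(x)-\frac{\gamma\,(x^2-1)}{6\sqrt{n}}\,\phi(x)+O(n^{-1}),
\qquad \gamma=\frac{E(X-\mu)^3}{\sigma^3},
$$

where Φ and φ are the standard normal distribution function and density. The bootstrap distribution of the resampled mean admits the same expansion with γ replaced by the sample skewness; since that substitution costs only O_p(n^{-1/2}), the bootstrap error is O_p(n^{-1}), compared with the O(n^{-1/2}) error of the plain normal approximation whenever γ ≠ 0.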

Journal ArticleDOI
TL;DR: In this paper, a geometrically consistent procedure based on the Euclidean distance is proposed, which involves the least absolute deviation (LAD) regression and a new permutation test for matched pairs (PTMP).
Abstract: The effects of outliers on linear regression are examined. The sensitivity of classical least‐squares (LS) procedures to outliers is shown to be associated with the geometric inconsistency between the data space and the analysis space. This is illustrated for both estimation and inference. A geometrically consistent procedure based on the Euclidean distance is proposed. This procedure involves the least absolute deviation (LAD) regression and a new permutation test for matched pairs (PTMP). Comparisons made with LS techniques demonstrate that the proposed procedure is more resistant to the existence of outliers in the data set and leads to more intuitive results. Applications and illustrations using meteorological and climatological data are also discussed.
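The paper's permutation test for matched pairs (PTMP) is built on Euclidean distances, but the mechanics of a matched-pairs randomization test can be conveyed by the standard sign-flipping sketch below; the authors' statistic differs in detail.

```python
import numpy as np

def paired_randomization_test(x, y, n_perm=9999, seed=None):
    """Randomization test for matched pairs: under the null, each
    within-pair difference is equally likely to carry either sign."""
    rng = np.random.default_rng(seed)
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    t_obs = abs(d.mean())
    count = 0
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=len(d))
        if abs((signs * d).mean()) >= t_obs:
            count += 1
    return t_obs, (count + 1) / (n_perm + 1)
```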

Book ChapterDOI
01 Jan 1989
TL;DR: In this paper, the authors evaluate, via computer simulation, two resampling techniques, the jackknife and the bootstrap, for point estimation and confidence intervals when estimating gypsy moth populations from the ratio of mean frass drop to mean frass production.
Abstract: Estimates of gypsy moth populations can be obtained using a ratio of mean frass drop from a forest canopy to mean frass production for individually caged larvae. Appropriate statistical methods for point estimation and confidence intervals have been developed. Those methods included the use of two resampling techniques, the jackknife and the bootstrap. Exact theoretical comparisons of the proposed methods are essentially impossible. This paper evaluates the techniques via computer simulations for a limited number of situations.
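For concreteness, a delete-one jackknife for a ratio of means, which is the shape of the estimator described above, looks like the sketch below. For simplicity it treats the numerator and denominator as measured on paired units, which is stronger than the actual field design (separate canopy traps and caged larvae); names are mine.

```python
import numpy as np

def jackknife_ratio(y, x):
    """Delete-one jackknife for R = mean(y) / mean(x): returns the
    bias-corrected estimate and a jackknife standard error."""
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    n = len(y)
    r_hat = y.mean() / x.mean()
    idx = np.arange(n)
    r_i = np.array([y[idx != i].mean() / x[idx != i].mean() for i in range(n)])
    pseudo = n * r_hat - (n - 1) * r_i            # jackknife pseudovalues
    return pseudo.mean(), pseudo.std(ddof=1) / np.sqrt(n)
```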


Patent
18 Jan 1989
TL;DR: In this article, interpolation processing that exploits line correlation is applied to a signal synthesized from the current field and the field two fields earlier (two fields in total), together with the output of a delay circuit that retards the synthesized signal by a prescribed horizontal scanning period, in order to attain high resolution for moving pictures and make beat disturbance inconspicuous.
Abstract: PURPOSE: To attain high resolution for moving pictures and to make beat disturbance inconspicuous by applying interpolation processing that utilizes line correlation, based on a signal synthesized from the current field and the field two fields earlier (two fields in total) and on the output delay signal of a delay circuit retarding the synthesized signal by a prescribed horizontal scanning period. CONSTITUTION: An interpolation processing circuit 30 produces a first synthesized resampling signal, comprising the high-frequency component of a resampling signal delayed by 1H in a 1H delay circuit 31 and the low-frequency component of the input resampling signal, and a second synthesized resampling signal formed from the input resampling signals alternately in time series at intervals of 1/(2fc1); both are fed to terminal 33b of a switch circuit 33. The switch circuit 33 is switched under the control of the output detection signal of a motion detection circuit 32: the synthesized resampling signal is selected from terminal 33b when a moving picture is detected, and the input resampling signal is selected from terminal 33a when a still picture is detected.