Journal Article

Computer-intensive methods in statistical analysis

TLDR
This article provides a readable, self-contained introduction to the bootstrap and jackknife methodology for statistical inference; in particular, the focus is on the derivation of confidence intervals in general situations.
Abstract
As far back as the late 1970s, the impact of affordable, high-speed computers on the theory and practice of modern statistics was recognized by Efron (1979, 1982). As a result, the bootstrap and other computer-intensive statistical methods (such as subsampling and the jackknife) have been developed extensively since that time and now constitute very powerful (and intuitive) tools to do statistics with. This article provides a readable, self-contained introduction to the bootstrap and jackknife methodology for statistical inference; in particular, the focus is on the derivation of confidence intervals in general situations. A guide to the available bibliography on bootstrap methods is also offered.
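To make the confidence-interval idea concrete, the following sketch shows a percentile bootstrap interval in Python; the function name bootstrap_ci, the NumPy-based implementation, and the simulated sample are illustrative assumptions and are not taken from the article.

    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_ci(data, stat=np.mean, n_boot=2000, alpha=0.05):
        # Percentile bootstrap: resample the data with replacement, recompute
        # the statistic on each resample, and read off the alpha/2 and
        # 1 - alpha/2 quantiles of the resulting bootstrap distribution.
        data = np.asarray(data)
        n = data.shape[0]
        boot_stats = np.empty(n_boot)
        for b in range(n_boot):
            resample = data[rng.integers(0, n, size=n)]  # n draws with replacement
            boot_stats[b] = stat(resample)
        return np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2])

    # Example: an approximate 95% interval for the mean of a simulated sample.
    sample = rng.normal(loc=5.0, scale=2.0, size=30)
    low, high = bootstrap_ci(sample)
    print(low, high)

Percentile intervals typically stabilize with on the order of 1,000 to 2,000 resamples, at the cost of that many recomputations of the statistic.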


Citations
Book

Guide to Biometrics

TL;DR: This complete, technical guide details the principles, methods, technologies, and core ideas used in biometric authentication systems and defines and explains how to measure the performance of both verification and identification systems.

Recursive Bayesian Estimation: Navigation and Tracking Applications

TL;DR: This thesis phrases the application of terrain navigation in the Bayesian framework, develops a numerical approximation to the optimal but intractable recursive solution, and derives explicit expressions for the Cramér-Rao bound of general nonlinear filtering, smoothing, and prediction problems.
Journal Article

The bootstrap and its application in signal processing

TL;DR: The motivations for using the bootstrap in typical signal processing applications are highlighted, and the use of the bootstrap for constructing confidence intervals for flight parameters in a passive acoustic emission problem is demonstrated.
Journal Article

Spectro-temporal response field characterization with dynamic ripples in ferret primary auditory cortex.

TL;DR: It is shown that for most neurons, the lack of full separability stems from differences between the upward and downward spectral cross-sections but not from the temporal cross-sections; this places strong constraints on the neural inputs of these AI units.
Journal Article

Subspace-based fault detection algorithms for vibration monitoring

TL;DR: New fault detection algorithms are described and analyzed, based on recent stochastic subspace-based identification methods and the statistical local approach to the design of detection algorithms.
References
Journal Article

Bootstrap Methods: Another Look at the Jackknife

TL;DR: In this article, the authors discuss the problem of estimating the sampling distribution of a pre-specified random variable R(X, F) on the basis of the observed data x.
Book

The jackknife, the bootstrap, and other resampling plans

Bradley Efron
TL;DR: The book covers the delta method and the influence function; cross-validation, the jackknife, and the bootstrap; balanced repeated replication (half-sampling); random subsampling; and nonparametric confidence intervals.
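As a rough companion to the resampling topics listed above, here is a minimal jackknife standard-error sketch; the function name jackknife_se and the simulated data are illustrative assumptions, not code from the book.

    import numpy as np

    def jackknife_se(data, stat=np.mean):
        # Jackknife standard error: recompute the statistic with each
        # observation left out in turn, then scale the spread of the
        # leave-one-out values by (n - 1) / n.
        data = np.asarray(data)
        n = data.shape[0]
        loo = np.array([stat(np.delete(data, i)) for i in range(n)])
        return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

    # For the sample mean, the jackknife reproduces the usual s / sqrt(n).
    rng = np.random.default_rng(1)
    sample = rng.exponential(scale=3.0, size=25)
    print(jackknife_se(sample))
    print(sample.std(ddof=1) / np.sqrt(len(sample)))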
Journal Article

Bootstrap Methods for Standard Errors, Confidence Intervals, and Other Measures of Statistical Accuracy

TL;DR: The bootstrap is extended to other measures of statistical accuracy such as bias and prediction error, and to complicated data structures such as time series, censored data, and regression models.
Journal Article

Spectral Analysis and Time Series

TL;DR: In this article, the authors introduce stationary random processes and spectral analysis in both the time and frequency domains, and present an analysis of processes with mixed spectra.
Journal Article

A Leisurely Look at the Bootstrap, the Jackknife, and Cross-Validation

TL;DR: This paper reviews the nonparametric estimation of statistical error, mainly the bias and standard error of an estimator or the error rate of a prediction rule, at a relaxed mathematical level, omitting most proofs, regularity conditions, and technical details.