scispace - formally typeset
Author

Joseph P. Romano

Bio: Joseph P. Romano is an academic researcher from Stanford University. The author has contributed to research in topics: Multiple comparisons problem & Estimator. The author has an h-index of 50 and has co-authored 139 publications receiving 11,484 citations. Previous affiliations of Joseph P. Romano include University of California, Berkeley & University of California, San Diego.


Papers
Journal ArticleDOI
TL;DR: This paper introduces the stationary bootstrap, a resampling technique for calculating standard errors of estimators and constructing confidence regions for parameters based on weakly dependent stationary observations.
Abstract: This article introduces a resampling procedure called the stationary bootstrap as a means of calculating standard errors of estimators and constructing confidence regions for parameters based on weakly dependent stationary observations. Previously, a technique based on resampling blocks of consecutive observations was introduced to construct confidence intervals for a parameter of the m-dimensional joint distribution of m consecutive observations, where m is fixed. This procedure has been generalized by constructing a “blocks of blocks” resampling scheme that yields asymptotically valid procedures even for a multivariate parameter of the whole (i.e., infinite-dimensional) joint distribution of the stationary sequence of observations. These methods share the construction of resampling blocks of observations to form a pseudo-time series, so that the statistic of interest may be recalculated based on the resampled data set. But in the context of applying this method to stationary data, it is natural...
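As a rough illustration of the resampling scheme described in the abstract (a sketch, not the authors' code), a pseudo-time series can be generated by concatenating blocks whose lengths are geometrically distributed; the parameter `p` (mean block length `1/p`) is a tuning choice, and the function name is illustrative:

```python
import numpy as np

def stationary_bootstrap(x, p=0.1, rng=None):
    """One stationary-bootstrap pseudo-series: blocks start at random
    indices and have geometric lengths with mean 1/p, wrapping circularly."""
    rng = np.random.default_rng(rng)
    n = len(x)
    out = np.empty(n)
    i = rng.integers(n)          # start of the current block
    for t in range(n):
        out[t] = x[i % n]        # circular wrapping keeps the series stationary
        if rng.random() < p:     # with prob p, start a new block
            i = rng.integers(n)
        else:                    # otherwise continue the current block
            i += 1
    return out
```

The statistic of interest is then recomputed on many such pseudo-series to estimate its sampling variability.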

2,418 citations

Journal ArticleDOI
TL;DR: This paper studies the construction of confidence regions by approximating the sampling distribution of a statistic, where the true sampling distribution is estimated by an appropriate normalization of the values of the statistic computed over subsamples of the data.
Abstract: In this article, the construction of confidence regions by approximating the sampling distribution of some statistic is studied. The true sampling distribution is estimated by an appropriate normalization of the values of the statistic computed over subsamples of the data. In the i.i.d. context, the method has been studied by Wu in regular situations where the statistic is asymptotically normal. The goal of the present work is to prove the method yields asymptotically valid confidence regions under minimal conditions. Essentially, all that is required is that the statistic, suitably normalized, possesses a limit distribution under the true model. Unlike the bootstrap, the convergence to the limit distribution need not be uniform in any sense. The method is readily adapted to parameters of stationary time series or, more generally, homogeneous random fields. For example, an immediate application is the construction of a confidence interval for the spectral density function of a homogeneous random field.
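A minimal sketch of the subsampling idea for the mean of a stationary series (illustrative only; the function name and the choice of contiguous blocks of size `b` are assumptions suited to time series, not the paper's general formulation):

```python
import numpy as np

def subsample_ci(x, b, alpha=0.05):
    """Subsampling confidence interval for the mean: approximate the law of
    sqrt(n)*(xbar - theta) by the values sqrt(b)*(subsample mean - xbar)
    over all n-b+1 contiguous length-b subsamples."""
    n = len(x)
    xbar = x.mean()
    stats = np.array([np.sqrt(b) * (x[i:i + b].mean() - xbar)
                      for i in range(n - b + 1)])
    lo_q, hi_q = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    # invert the approximation: theta lies in [xbar - hi_q/sqrt(n), xbar - lo_q/sqrt(n)]
    return xbar - hi_q / np.sqrt(n), xbar - lo_q / np.sqrt(n)
```

Note that, as the abstract emphasizes, all that is needed is that the normalized statistic has some limit law; no asymptotic normality is assumed.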

756 citations

Journal ArticleDOI
TL;DR: A stepwise multiple testing procedure is proposed that asymptotically controls the familywise error rate at a desired level; it implicitly captures the joint dependence structure of the test statistics, which increases the ability to detect alternative hypotheses.
Abstract: It is common in econometric applications that several hypothesis tests are carried out at the same time. The problem then becomes how to decide which hypotheses to reject, accounting for the multitude of tests. In this paper, we suggest a stepwise multiple testing procedure which asymptotically controls the familywise error rate at a desired level. Compared to related single-step methods, our procedure is more powerful in the sense that it often will reject more false hypotheses. Unlike some stepwise methods, our method implicitly captures the joint dependence structure of the test statistics, which results in increased ability to detect alternative hypotheses. We prove our method asymptotically controls the familywise error rate under minimal assumptions. Some simulation studies show the improvements of our methods over previous proposals. We also provide an application to a set of real data.
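The step-down logic can be sketched generically (this is an illustrative max-statistic step-down, not the paper's exact algorithm; the resampled null statistics `t_null` are assumed to be supplied, e.g. by a bootstrap):

```python
import numpy as np

def stepdown_maxT(t_obs, t_null, alpha=0.05):
    """Step-down max-T multiple testing. t_obs: (S,) observed statistics;
    t_null: (B, S) resampled statistics under the null. Repeatedly reject
    every hypothesis whose statistic exceeds the (1-alpha) quantile of the
    max over the hypotheses still in play, then recompute the critical value."""
    S = len(t_obs)
    active = np.ones(S, dtype=bool)
    rejected = np.zeros(S, dtype=bool)
    while active.any():
        # critical value from the joint distribution of the remaining statistics
        crit = np.quantile(t_null[:, active].max(axis=1), 1 - alpha)
        newly = active & (t_obs > crit)
        if not newly.any():
            break
        rejected |= newly
        active &= ~newly
    return rejected
```

Using the max over only the *remaining* hypotheses at each step is what makes the procedure more powerful than a single-step method, while the resampled joint distribution captures the dependence among the statistics.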

619 citations

Book
01 Jan 2003
TL;DR: In this article, a stepwise multiple testing procedure that asymptotically controls the familywise error rate is proposed, which implicitly captures the joint dependence structure of the test statistics, which results in increased ability to detect false hypotheses.
Abstract: In econometric applications, often several hypothesis tests are carried out at once. The problem then becomes how to decide which hypotheses to reject, accounting for the multitude of tests. This paper suggests a stepwise multiple testing procedure that asymptotically controls the familywise error rate. Compared to related single-step methods, the procedure is more powerful and often will reject more false hypotheses. In addition, we advocate the use of studentization when feasible. Unlike some stepwise methods, the method implicitly captures the joint dependence structure of the test statistics, which results in increased ability to detect false hypotheses. The methodology is presented in the context of comparing several strategies to a common benchmark. However, our ideas can easily be extended to other contexts where multiple tests occur. Some simulation studies show the improvements of our methods over previous proposals. We also provide an application to a set of real data.
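The studentization advocated above, in the benchmark-comparison setting, amounts to scaling each strategy's mean outperformance by its own standard error (a sketch under assumed array shapes; names are illustrative):

```python
import numpy as np

def studentized_stats(strategies, benchmark):
    """Studentized statistics for comparing S strategies to a common benchmark.
    strategies: (n, S) per-period outcomes; benchmark: (n,) benchmark outcomes.
    Returns t_s = sqrt(n) * mean(d_s) / sd(d_s), where d_s is the per-period
    difference between strategy s and the benchmark."""
    d = strategies - benchmark[:, None]          # shape (n, S)
    n = d.shape[0]
    return np.sqrt(n) * d.mean(axis=0) / d.std(axis=0, ddof=1)
```

Studentization puts strategies with different variances on a comparable scale before the stepwise procedure is applied.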

452 citations

Journal ArticleDOI
TL;DR: It is shown that the empirical likelihood method for constructing confidence intervals is Bartlett-correctable: a simple adjustment for the expected value of the log-likelihood ratio reduces coverage error to $O(n^{-2})$, where $n$ denotes sample size.
Abstract: It is shown that, in a very general setting, the empirical likelihood method for constructing confidence intervals is Bartlett-correctable. This means that a simple adjustment for the expected value of log-likelihood ratio reduces coverage error to an extremely low $O(n^{-2})$, where $n$ denotes sample size. That fact makes empirical likelihood competitive with methods such as the bootstrap which are not Bartlett-correctable and which usually have coverage error of size $n^{-1}$. Most importantly, our work demonstrates a strong link between empirical likelihood and parametric likelihood, since the Bartlett correction had previously only been available for parametric likelihood. A general formula is given for the Bartlett correction, valid in a very wide range of problems, including estimation of mean, variance, covariance, correlation, skewness, kurtosis, mean ratio, mean difference, variance ratio, etc. The efficacy of the correction is demonstrated in a simulation study for the case of the mean.
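For concreteness, the empirical likelihood ratio statistic for the mean can be computed by solving the score equation for the Lagrange multiplier (a minimal sketch; the Bartlett correction factor itself, which rescales this statistic, is not shown):

```python
import numpy as np

def el_loglik_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean: weights
    w_i proportional to 1/(1 + lam*(x_i - mu)), with lam solving
    sum z_i/(1 + lam*z_i) = 0 for z_i = x_i - mu (found by bisection)."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf  # mu outside the convex hull of the data: EL undefined
    # feasible lam keeps all 1 + lam*z_i > 0
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    for _ in range(200):  # the score is decreasing in lam, so bisect
        lam = 0.5 * (lo + hi)
        if np.sum(z / (1 + lam * z)) > 0:
            lo = lam
        else:
            hi = lam
    return 2 * np.sum(np.log1p(lam * z))
```

A confidence interval collects the values of `mu` for which this statistic falls below a chi-squared quantile; the Bartlett correction adjusts that threshold to achieve the $O(n^{-2})$ coverage error.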

410 citations


Cited by
Journal ArticleDOI
TL;DR: Convergence of Probability Measures is P. Billingsley's monograph on the weak convergence of probability measures.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p.

5,689 citations

Journal Article
TL;DR: Prospect Theory led cognitive psychology in a new direction that began to uncover other human biases in thinking that are probably not learned but are part of the brain's wiring.
Abstract: In 1974 an article appeared in Science magazine with the dry-sounding title “Judgment Under Uncertainty: Heuristics and Biases” by a pair of psychologists who were not well known outside their discipline of decision theory. In it Amos Tversky and Daniel Kahneman introduced the world to Prospect Theory, which mapped out how humans actually behave when faced with decisions about gains and losses, in contrast to how economists assumed that people behave. Prospect Theory turned Economics on its head by demonstrating through a series of ingenious experiments that people are much more concerned with losses than they are with gains, and that framing a choice from one perspective or the other will result in decisions that are exactly the opposite of each other, even if the outcomes are monetarily the same. Prospect Theory led cognitive psychology in a new direction that began to uncover other human biases in thinking that are probably not learned but are part of our brain’s wiring.

4,351 citations

Journal ArticleDOI
TL;DR: A joinpoint regression model is applied to describe continuous changes in recent trends in cancer mortality and incidence rates, and the grid-search method is used to fit the regression function with unknown joinpoints, assuming constant variance and uncorrelated errors.
Abstract: The identification of changes in the recent trend is an important issue in the analysis of cancer mortality and incidence data. We apply a joinpoint regression model to describe such continuous changes and use the grid-search method to fit the regression function with unknown joinpoints assuming constant variance and uncorrelated errors. We find the number of significant joinpoints by performing several permutation tests, each of which has a correct significance level asymptotically. Each p-value is found using Monte Carlo methods, and the overall asymptotic significance level is maintained through a Bonferroni correction. These tests are extended to the situation with non-constant variance to handle rates with Poisson variation and possibly autocorrelated errors. The performance of these tests is studied via simulations and the tests are applied to U.S. prostate cancer incidence and mortality rates.
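The grid-search step for a single joinpoint can be sketched as follows (illustrative only, under the constant-variance, uncorrelated-errors assumption; the permutation test for the number of joinpoints is not shown):

```python
import numpy as np

def fit_one_joinpoint(x, y):
    """Grid search for one joinpoint: for each candidate knot k, fit the
    continuous piecewise-linear model y ~ b0 + b1*x + b2*(x-k)_+ by least
    squares, and keep the knot with the smallest residual sum of squares."""
    best = (np.inf, None, None)
    for k in x[1:-1]:  # interior grid points as candidate knots
        X = np.column_stack([np.ones_like(x), x, np.clip(x - k, 0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if sse < best[0]:
            best = (sse, k, beta)
    return best  # (sse, knot, coefficients)
```

The hinge term `(x-k)_+` forces the two segments to join continuously at the knot, which is what distinguishes joinpoint regression from fitting two unrelated lines.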

3,950 citations