SciSpace - Formerly Typeset
Author

Roberto S. Mariano

Other affiliations: Singapore Management University
Bio: Roberto S. Mariano is an academic researcher from the University of Pennsylvania. He has contributed to research on topics including econometric models and Monte Carlo methods. He has an h-index of 26 and has co-authored 101 publications receiving 14,436 citations. Previous affiliations of Roberto S. Mariano include Singapore Management University.


Papers
Posted Content
TL;DR: The authors describe the advantages of these studies, suggest how they can be improved, and provide aids for judging the validity of the inferences they draw, advocating design complications such as multiple treatment and comparison groups and multiple pre- or post-intervention observations.
Abstract: Using research designs patterned after randomized experiments, many recent economic studies examine outcome measures for treatment groups and comparison groups that are not randomly assigned. By using variation in explanatory variables generated by changes in state laws, government draft mechanisms, or other means, these studies obtain variation that is readily examined and is plausibly exogenous. This paper describes the advantages of these studies and suggests how they can be improved. It also provides aids in judging the validity of inferences they draw. Design complications such as multiple treatment and comparison groups and multiple pre- or post-intervention observations are advocated.

7,222 citations

ReportDOI
TL;DR: In this article, explicit tests of the null hypothesis of no difference in the accuracy of two competing forecasts are proposed, and both asymptotic and exact finite-sample versions are evaluated and illustrated.
Abstract: We propose and evaluate explicit tests of the null hypothesis of no difference in the accuracy of two competing forecasts. In contrast to previously developed tests, a wide variety of accuracy measures can be used (in particular, the loss function need not be quadratic and need not even be symmetric), and forecast errors can be non-Gaussian, nonzero mean, serially correlated, and contemporaneously correlated. Asymptotic and exact finite-sample tests are proposed, evaluated, and illustrated.

5,628 citations

Journal ArticleDOI
TL;DR: In this paper, the authors apply the Kalman filter to a state-space representation of a factor model to evaluate the likelihood function; the resulting coincident index is essentially the smoothed estimate of latent monthly real GDP and should improve upon the Stock-Watson index.
Abstract: Maximum likelihood factor analysis of time series is possible even when some series are quarterly and others are monthly. Treating quarterly series as monthly series with missing observations and replacing them with artificial observations independent of the model parameters, one can apply the Kalman filter to a state-space representation of a factor model and evaluate the likelihood function. An application to quarterly real GDP and monthly coincident business cycle indicators gives a new coincident index of business cycles. The new index is essentially the smoothed estimate of latent monthly real GDP and should improve upon the Stock‐Watson index.

560 citations
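The key device in the abstract above — treating a quarterly series as a monthly series with missing observations — can be sketched with a scalar local-level model. This is a deliberately simplified illustration of the missing-observation Kalman filter, not the paper's full factor model:

```python
import numpy as np

def kalman_loglik(y, q=1.0, r=1.0):
    """Log-likelihood of a local-level model
        x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
        y_t = x_t + v_t,      v_t ~ N(0, r)
    where NaN entries of y are treated as missing: the filter runs the
    prediction step but skips the update and the likelihood term, just
    as a quarterly series can be handled as a monthly series with
    missing observations."""
    x, P = 0.0, 1e6          # diffuse-ish initial state
    ll = 0.0
    for obs in y:
        P = P + q            # predict
        if np.isnan(obs):    # missing month: no update, no likelihood term
            continue
        S = P + r            # innovation variance
        K = P / S            # Kalman gain
        innov = obs - x
        ll += -0.5 * (np.log(2 * np.pi * S) + innov ** 2 / S)
        x = x + K * innov    # update
        P = (1 - K) * P
    return ll
```

In the paper's setting the state is a vector (the latent factor and idiosyncratic components), but the skip-the-update logic is the same.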

Journal ArticleDOI
TL;DR: The authors show that the sensitivity of consumption to transitory income is due not to liquidity constraints but to an inadequate estimation procedure for the consumption function, a finding that supports tax discounting.

163 citations

Journal ArticleDOI
TL;DR: In this article, Gaussian vector autoregression (VAR) and factor models for latent monthly real gross domestic product (GDP) and other coincident indicators are estimated using the observable mixed-frequency series.
Abstract: The Stock–Watson coincident index and its subsequent extensions assume a static linear one-factor model for the component indicators. This restrictive assumption is unnecessary if one defines a coincident index as an estimate of monthly real gross domestic products (GDP). This paper estimates Gaussian vector autoregression (VAR) and factor models for latent monthly real GDP and other coincident indicators using the observable mixed-frequency series. For maximum likelihood estimation of a VAR model, the expectation-maximization (EM) algorithm helps in finding a good starting value for a quasi-Newton method. The smoothed estimate of latent monthly real GDP is a natural extension of the Stock–Watson coincident index.

155 citations
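The mixed-frequency layout underlying both coincident-index papers can be illustrated with a toy aggregator. The within-quarter average is one common approximation for levels; it is not necessarily the paper's exact aggregation rule:

```python
import numpy as np

def quarterly_from_monthly(monthly):
    """Turn a monthly series into a mixed-frequency quarterly series:
    the quarterly value (here the within-quarter average) is placed in
    the third month of each quarter and every other month is NaN.
    This is exactly the layout in which a quarterly series is treated
    as a monthly series with missing observations."""
    monthly = np.asarray(monthly, dtype=float)
    n = monthly.size - monthly.size % 3          # complete quarters only
    out = np.full(monthly.size, np.nan)
    q = monthly[:n].reshape(-1, 3).mean(axis=1)  # per-quarter average
    out[2:n:3] = q
    return out
```

A missing-observation Kalman filter can then evaluate the likelihood of such a series alongside genuinely monthly indicators.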


Cited by
Book
01 Jan 2003
TL;DR: In this paper, the authors describe the new generation of discrete choice methods, focusing on the many advances made possible by simulation, and compare simulation-assisted estimation procedures, including maximum simulated likelihood, the method of simulated moments, and the method of simulated scores.
Abstract: This book describes the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation. Researchers use these statistical methods to examine the choices that consumers, households, firms, and other agents make. Each of the major models is covered: logit, generalized extreme value, or GEV (including nested and cross-nested logits), probit, and mixed logit, plus a variety of specifications that build on these basics. Simulation-assisted estimation procedures are investigated and compared, including maximum simulated likelihood, method of simulated moments, and method of simulated scores. Procedures for drawing from densities are described, including variance reduction techniques such as antithetics and Halton draws. Recent advances in Bayesian procedures are explored, including the use of the Metropolis-Hastings algorithm and its variant Gibbs sampling. No other book incorporates all these fields, which have arisen in the past 20 years. The procedures are applicable in many fields, including energy, transportation, environmental studies, health, labor, and marketing.

7,768 citations
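The simulation-assisted estimation the book covers can be illustrated by simulating mixed logit choice probabilities with antithetic draws, one of the variance-reduction techniques mentioned in the abstract. The one-coefficient setup is a deliberately minimal sketch:

```python
import numpy as np

def mixed_logit_prob(x, beta_mean, beta_sd, n_draws=500, seed=0):
    """Simulated choice probabilities for a mixed logit with a single
    normally distributed coefficient.  Antithetic draws pair each
    standard-normal draw z with -z, which reduces simulation variance.

    x : (J,) attribute of each of J alternatives
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_draws // 2)
    z = np.concatenate([z, -z])           # antithetic pairs
    betas = beta_mean + beta_sd * z       # draws of the random coefficient
    u = np.outer(betas, x)                # (n_draws, J) utilities
    u -= u.max(axis=1, keepdims=True)     # numerical stability
    p = np.exp(u)
    p /= p.sum(axis=1, keepdims=True)     # logit probabilities per draw
    return p.mean(axis=0)                 # average over draws
```

In maximum simulated likelihood these simulated probabilities replace the (intractable) exact choice probabilities inside the log-likelihood; Halton draws would simply replace the pseudo-random `z`.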

ReportDOI
TL;DR: In this paper, the authors developed asymptotic distribution theory for instrumental variable regression when the partial correlation between the instruments and a single included endogenous variable is weak, here modeled as local to zero.
Abstract: This paper develops asymptotic distribution theory for instrumental variable regression when the partial correlation between the instruments and a single included endogenous variable is weak, here modeled as local to zero. Asymptotic representations are provided for various instrumental variable statistics, including the two-stage least squares (TSLS) and limited information maximum- likelihood (LIML) estimators and their t-statistics. The asymptotic distributions are found to provide good approximations to sampling distributions with just 20 observations per instrument. Even in large samples, TSLS can be badly biased, but LIML is, in many cases, approximately median unbiased. The theory suggests concrete quantitative guidelines for applied work. These guidelines help to interpret Angrist and Krueger's (1991) estimates of the returns to education: whereas TSLS estimates with many instruments approach the OLS estimate of 6%, the more reliable LIML and TSLS estimates with fewer instruments fall between 8% and 10%, with a typical confidence interval of (6%, 14%).

5,249 citations
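A small simulation, under an illustrative design (not the paper's), shows the weak-instrument behavior analyzed above: with a near-zero first stage, TSLS is pulled toward the biased OLS estimate:

```python
import numpy as np

def tsls(y, x, Z):
    """Two-stage least squares: project x on the instruments Z (plus a
    constant), then regress y on the fitted values; returns the slope."""
    n = len(y)
    Zc = np.column_stack([np.ones(n), Z])
    xhat = Zc @ np.linalg.lstsq(Zc, x, rcond=None)[0]   # first stage
    Xc = np.column_stack([np.ones(n), xhat])
    return np.linalg.lstsq(Xc, y, rcond=None)[0][1]     # second stage

def weak_iv_draw(pi, n=2000, n_instr=20, beta=1.0, rho=0.8, seed=0):
    """One draw from an illustrative design with n_instr instruments of
    common strength pi and endogeneity corr(u, v) = rho.  With pi near
    zero (the local-to-zero case the paper models), the instruments are
    weak and TSLS drifts toward the upward-biased OLS estimate."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n, n_instr))
    v = rng.standard_normal(n)
    u = rho * v + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
    x = pi * Z.sum(axis=1) + v    # first-stage strength scales with pi
    y = beta * x + u
    return tsls(y, x, Z)
```

With a strong first stage the estimate sits near the true beta = 1; with an irrelevant first stage and many instruments it lands well above it, in the direction of OLS.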

Journal ArticleDOI
TL;DR: In this paper, the relationship between aggregate productivity and stock and flow government-spending variables is investigated. The empirical results indicate that the non-military public capital stock is dramatically more important in determining productivity than is either the flow of non-military or military spending, and that military capital bears little relation to productivity.

5,163 citations