
Showing papers in "Journal of Applied Econometrics in 2017"


Journal ArticleDOI
TL;DR: In this article, the authors show that the rule of 42 is not true for unbalanced clusters and use critical values based on the wild cluster bootstrap to improve the performance of CRVE.
Abstract: Summary The cluster robust variance estimator (CRVE) relies on the number of clusters being sufficiently large. Monte Carlo evidence suggests that the ‘rule of 42’ is not true for unbalanced clusters. Rejection frequencies are higher for datasets with 50 clusters proportional to US state populations than with 50 balanced clusters. Using critical values based on the wild cluster bootstrap performs much better. However, this procedure fails when a small number of clusters is treated. We explain why CRVE t statistics and the wild bootstrap fail in this case, study the ‘effective number’ of clusters and simulate placebo laws with dummy variable regressors. Copyright © 2016 John Wiley & Sons, Ltd.

300 citations
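
A minimal sketch of the wild cluster bootstrap-t procedure the abstract describes, on simulated data with balanced clusters and Rademacher weights; the design, variable names, and tuning below are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated clustered data: y = a + b*x + cluster effect + noise (true b = 0)
G, n_g = 20, 30                         # clusters, observations per cluster
g = np.repeat(np.arange(G), n_g)
x = rng.normal(size=G * n_g)
y = 1.0 + 0.0 * x + rng.normal(size=G)[g] + rng.normal(size=G * n_g)
X = np.column_stack([np.ones_like(x), x])

def ols(X, y):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta, y - X @ beta

def crve_se(X, u, g, k=1):
    # Cluster-robust variance estimator (CRVE) standard error of coefficient k
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = sum(np.outer(X[g == c].T @ u[g == c], X[g == c].T @ u[g == c])
               for c in np.unique(g))
    return np.sqrt((XtX_inv @ meat @ XtX_inv)[k, k])

beta, u = ols(X, y)
t_obs = beta[1] / crve_se(X, u, g)       # CRVE t statistic for H0: b = 0

# Wild cluster bootstrap: impose H0, flip restricted residuals cluster by cluster
beta_r, u_r = ols(X[:, [0]], y)          # restricted fit with b = 0
t_boot = []
for _ in range(999):
    w = rng.choice([-1.0, 1.0], size=G)[g]      # Rademacher weights per cluster
    y_b = X[:, 0] * beta_r[0] + u_r * w
    b_b, u_b = ols(X, y_b)
    t_boot.append(b_b[1] / crve_se(X, u_b, g))

p = np.mean(np.abs(t_boot) >= np.abs(t_obs))
print(f"CRVE t = {t_obs:.2f}, wild cluster bootstrap p = {p:.3f}")
```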


Journal ArticleDOI
TL;DR: The authors show that anticipatory behavior provides an important explanation for this result: gasoline buyers increase purchases before tax increases and delay purchases before tax decreases, rendering the tax instrument endogenous. Including suitable leads and lags in the regression restores the validity of the IV estimator.
Abstract: Summary Least-squares estimates of the response of gasoline consumption to a change in the gasoline price are biased toward zero, given the endogeneity of gasoline prices. A seemingly natural solution to this problem is to instrument for gasoline prices using gasoline taxes, but this approach tends to yield implausibly large price elasticities. We demonstrate that anticipatory behavior provides an important explanation for this result. Gasoline buyers increase purchases before tax increases and delay purchases before tax decreases, rendering the tax instrument endogenous. Including suitable leads and lags in the regression restores the validity of the IV estimator, resulting in much lower elasticity estimates. Copyright © 2016 John Wiley & Sons, Ltd.

124 citations
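
The mechanics of the leads-and-lags IV regression can be sketched as follows; the simulated tax and price series and the single lead/lag are illustrative assumptions, not the paper's data or specification:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400

# Toy series: step-like tax rate, price with pass-through, log consumption
tax = np.cumsum(rng.choice([0.0, 0.01], size=T, p=[0.95, 0.05]))
price = 0.5 + tax + 0.1 * rng.normal(size=T)
cons = 2.0 - 0.3 * price + 0.05 * rng.normal(size=T)   # true elasticity -0.3

def lead_lag(z, k):
    """Shift z by k periods (k > 0: lag, k < 0: lead), padding with NaN."""
    out = np.full_like(z, np.nan)
    if k > 0:
        out[k:] = z[:-k]
    elif k < 0:
        out[:k] = z[-k:]
    else:
        out[:] = z
    return out

# Leads and lags of the tax enter the regression as anticipation controls
controls = np.column_stack([np.ones(T), lead_lag(tax, -1), lead_lag(tax, 1)])
X = np.column_stack([price, controls])   # endogenous price + controls
Z = np.column_stack([tax, controls])     # tax instruments price
keep = ~np.isnan(X).any(axis=1)
Xk, Zk, yk = X[keep], Z[keep], cons[keep]

# 2SLS: project X on Z, then regress y on the fitted values
Xhat = Zk @ np.linalg.lstsq(Zk, Xk, rcond=None)[0]
beta = np.linalg.lstsq(Xhat, yk, rcond=None)[0]
print(f"2SLS price coefficient: {beta[0]:+.3f} (true -0.3)")
```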


Journal ArticleDOI
TL;DR: In this paper, the authors examine the role played by loan supply shocks over the business cycle in the euro area, the UK and the USA from 1980 to 2011 by estimating time-varying parameter vector autoregression models with stochastic volatility and identifying these shocks with sign restrictions consistent with the recent macroeconomic literature.
Abstract: Summary This paper provides empirical evidence on the role played by loan supply shocks over the business cycle in the euro area, the UK and the USA from 1980 to 2011 by estimating time-varying parameter vector autoregression models with stochastic volatility and identifying these shocks with sign restrictions consistent with the recent macroeconomic literature. The evidence suggests that in all three economic areas loan supply shocks appear to have a significant effect, with clear signs of an increasing impact over the past few years. Moreover, the role of loan supply shocks is estimated to be particularly important during recessions. Copyright © 2016 John Wiley & Sons, Ltd.

109 citations
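
The sign-restriction step can be illustrated with the standard rotation-and-accept algorithm on a fixed-coefficient VAR; the coefficients and the particular restrictions below are made up for illustration, and the paper's time-varying-parameter, stochastic-volatility setting is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

# Suppose a reduced-form VAR has been estimated with residual covariance Sigma
Sigma = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.0, 0.1],
                  [0.2, 0.1, 1.0]])
P = np.linalg.cholesky(Sigma)             # one factorization of Sigma

accepted = []
for _ in range(10_000):
    Q, R = np.linalg.qr(rng.normal(size=(3, 3)))
    Q = Q @ np.diag(np.sign(np.diag(R)))  # normalize for a uniform rotation draw
    B = P @ Q                              # candidate impact matrix
    s = B[:, 0]                            # candidate loan supply shock column
    # Illustrative impact restrictions: loans (+), lending rate (-), output (+)
    if s[0] > 0 and s[1] < 0 and s[2] > 0:
        accepted.append(B)

print(f"accepted {len(accepted)} of 10000 rotation draws")
```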


Journal ArticleDOI
TL;DR: In this paper, the authors consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models and propose Lasso type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability.
Abstract: Summary We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast realized covariance matrices almost as precisely as if we had known the true driving dynamics of these in advance. We next investigate the sources of these driving dynamics as well as the performance of the proposed models for forecasting the realized covariance matrices of the 30 Dow Jones stocks. We find that the dynamics are not stable as the data are aggregated from the daily to lower frequencies. Furthermore, we are able to beat our benchmark by a wide margin. Finally, we investigate the economic value of our forecasts in a portfolio selection exercise and find that in certain cases an investor is willing to pay a considerable amount in order to get access to our forecasts. Copyright © 2016 John Wiley & Sons, Ltd.

84 citations
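
A rough sketch of a penalized VAR for realized covariance matrices, fit equation by equation with Lasso on the unique (vech) elements; the simulated matrices and penalty level are assumptions, and real applications would transform the matrices (e.g. to log-matrix space) to preserve positive definiteness:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)

# Toy stand-in for a time series of realized covariance matrices (T x n x n)
T, n = 300, 5
idx = np.tril_indices(n)
base = np.eye(n)
mats = np.empty((T, n, n))
mats[0] = base
for t in range(1, T):
    E = rng.normal(scale=0.05, size=(n, n))
    mats[t] = 0.9 * mats[t - 1] + 0.1 * base + (E + E.T) / 2

Y = np.array([m[idx] for m in mats])      # vech(RC_t): unique elements only
X, Ynext = Y[:-1], Y[1:]                   # VAR(1): predict vech_t from vech_{t-1}

# Equation-by-equation Lasso keeps the problem tractable in high dimensions
models = [Lasso(alpha=0.01, max_iter=10_000).fit(X, Ynext[:, j])
          for j in range(Y.shape[1])]
nonzero = sum(np.count_nonzero(m.coef_) for m in models)
print(f"nonzero VAR coefficients: {nonzero} of {Y.shape[1] ** 2}")
```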


Journal ArticleDOI
TL;DR: The authors provide a narrow replication of Phillips and Sul's key results on convergence clubs, using the open source R software instead of the original GAUSS routines, and exactly replicate the reported convergence clubs, corresponding point estimates and standard errors.
Abstract: Summary Phillips and Sul (Journal of Applied Econometrics 2009, 24, 1153–1185) provide an algorithm to identify convergence clubs in a dynamic factor model of economic transition and growth. We provide a narrow replication of their key results, using the open source R software instead of the original GAUSS routines. We are able to exactly replicate their results on convergence clubs, corresponding point estimates and standard errors. We comment on minor differences between their reported results and their clustering algorithm. We propose simple adjustments of the original algorithm to make manual intervention unnecessary. The adjustments allow automated application of the algorithm to other data. Copyright © 2016 John Wiley & Sons, Ltd.

69 citations
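
The core of the Phillips–Sul procedure is the log t regression, sketched below in Python rather than the R or GAUSS code the paper discusses; the toy panel and the plain-OLS slope (the original test uses HAC standard errors and a one-sided t test) are simplifications:

```python
import numpy as np

def log_t_slope(y, r=0.3):
    """Slope from the Phillips-Sul log t regression on a (T x N) positive panel.
    A significantly negative slope is evidence against convergence."""
    T, N = y.shape
    h = N * y / y.sum(axis=1, keepdims=True)      # relative transition paths
    H = ((h - 1.0) ** 2).mean(axis=1)             # cross-sectional variance H_t
    t = np.arange(int(r * T), T)                  # discard early fraction r
    lhs = np.log(H[0] / H[t]) - 2.0 * np.log(np.log(t + 1.0))
    X = np.column_stack([np.ones(t.size), np.log(t + 1.0)])
    return np.linalg.lstsq(X, lhs, rcond=None)[0][1]

# Converging toy panel: idiosyncratic gaps die out around a common growth path
rng = np.random.default_rng(4)
T, N = 100, 20
common = np.linspace(1.0, 2.0, T)[:, None]
gaps = rng.normal(size=(1, N)) * np.linspace(1.0, 0.05, T)[:, None]
y = np.exp(common + 0.2 * gaps)
print(f"log t slope: {log_t_slope(y):+.2f} (non-negative suggests convergence)")
```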


Journal ArticleDOI
TL;DR: In this paper, the authors compare two program evaluation methodologies: the synthetic control method and the panel data approach, and apply both methods to estimate the effect of the political and economic integration of Hong Kong.
Abstract: Summary We compare two program evaluation methodologies: the synthetic control method and the panel data approach. We apply both methods to estimate the effect of the political and economic integration of Hong Kong. The results obtained differ depending on the methodology used. We then conduct a simulation that shows that the synthetic control method results in a post-treatment mean squared error, mean absolute percentage error, and mean error with a smaller interquartile range, whenever there is a good enough match. Copyright © 2016 John Wiley & Sons, Ltd.

67 citations
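
A minimal synthetic control sketch: choose non-negative donor weights summing to one that best match the treated unit's pre-treatment path. The data below are simulated, and the solver choice is ours:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Pre-treatment outcomes: treated unit (T0,) and donor pool (T0 x J)
T0, J = 40, 12
donors = rng.normal(size=(T0, J)).cumsum(axis=0)
treated = donors[:, :3] @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.1, size=T0)

def sc_loss(w):
    # Squared pre-treatment fit of the weighted donor combination
    return np.sum((treated - donors @ w) ** 2)

w0 = np.full(J, 1.0 / J)
res = minimize(sc_loss, w0, method="SLSQP",
               bounds=[(0.0, 1.0)] * J,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
w = res.x
print("donor weights:", np.round(w, 2))
print("pre-treatment RMSE:", np.sqrt(sc_loss(w) / T0).round(3))
```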


Journal ArticleDOI
TL;DR: The authors incorporated text data from MLS listings into a hedonic pricing model using a tokenization approach to estimate implicit prices for various words and phrases in the comments section of the MLS, which is populated by real estate agents who arguably have the most local market knowledge and know what homebuyers value.
Abstract: Summary This paper incorporates text data from MLS listings into a hedonic pricing model. We show that the comments section of the MLS, which is populated by real estate agents who arguably have the most local market knowledge and know what homebuyers value, provides information that improves the performance of both in-sample and out-of-sample pricing estimates. Text is found to decrease pricing error by more than 25%. Information from text is incorporated into a linear model using a tokenization approach. By doing so, the implicit prices for various words and phrases are estimated. The estimation focuses on simultaneous variable selection and estimation for linear models in the presence of a large number of variables using a penalized regression. The LASSO procedure and variants are shown to outperform least-squares in out-of-sample testing. Copyright © 2016 John Wiley & Sons, Ltd.

50 citations
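
The tokenization-plus-LASSO idea can be sketched in a few lines with scikit-learn; the toy remarks and prices are invented, and the real application works with a far larger document-term matrix:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Lasso

# Toy MLS-style remarks with listing prices; text and prices are made up
remarks = [
    "granite counters updated kitchen quiet cul de sac",
    "fixer upper needs work great bones large lot",
    "stunning remodel granite counters walk to park",
    "needs work investor special large lot",
]
log_price = np.log([420_000, 210_000, 450_000, 190_000])

# Tokenize the comments into a document-term matrix of unigrams and bigrams
vec = CountVectorizer(ngram_range=(1, 2), min_df=1)
X = vec.fit_transform(remarks).toarray()

# Penalized regression selects which words/phrases carry price information
fit = Lasso(alpha=0.01, max_iter=10_000).fit(X, log_price)
terms = np.array(vec.get_feature_names_out())
for t, c in sorted(zip(terms[fit.coef_ != 0], fit.coef_[fit.coef_ != 0]),
                   key=lambda p: -abs(p[1]))[:5]:
    print(f"{t:>20s}  implicit price (log points): {c:+.3f}")
```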


Journal ArticleDOI
TL;DR: In this article, the authors apply the DSGE-VAR methodology to assess the size of fiscal multipliers in the data and the relative contributions of two transmission mechanisms of government spending shocks, namely hand-to-mouth consumers and Edgeworth complementarity.
Abstract: This paper applies the DSGE-VAR methodology to assess the size of fiscal multipliers in the data and the relative contributions of two transmission mechanisms of government spending shocks, namely hand-to-mouth consumers and Edgeworth complementarity. Econometric experiments show that a DSGE model with Edgeworth complementarity is a better representation of the transmission mechanism of fiscal policy as it yields dynamic responses close to those obtained with the flexible DSGE-VAR model (i.e. an impact output multiplier larger than one and a crowding-in of private consumption). The estimated share of hand-to-mouth consumers is too small to replicate the positive response of private consumption.

50 citations


Journal ArticleDOI
TL;DR: In this paper, a preliminary analysis based on the estimation of the exponent of cross-sectional correlation, α, proposed by Bailey et al. provides a very clear-cut result, with an estimate of α very close to unity, not only indicating the presence of strong cross-sectional correlation but also being consistent with the factor literature typically assuming that α = 1.
Abstract: This paper provides an econometric examination of geographic R&D spillovers among countries by focusing on the issue of cross-sectional dependence, and in particular on the different ways – weak and strong – it may affect the model. A preliminary analysis based on the estimation of the exponent of cross-sectional correlation proposed by Bailey et al. (2013), α, provides a very clear-cut result with an estimate of α very close to unity, not only indicating the presence of strong cross-sectional correlation but also being consistent with the factor literature typically assuming that α = 1. Moreover, second-generation unit root tests suggest that while the unobserved idiosyncratic component of the variables under study may be stationary, the unobserved common factors appear to be nonstationary. Consequently, a factor structure appears to be preferable to a spatial error model, and in particular the Common Correlated Effects approach is employed since, among other things, it is still valid in the more general case of nonstationary common factors. Finally, comparing the results with those obtained with a spatial model gives some insights into the possible bias occurring when allowing only for weak correlation while strong correlation is present in the data.

50 citations


Journal ArticleDOI
TL;DR: In this article, treatment monotonicity and/or dominance assumptions are invoked to derive sharp bounds on the average treatment effects on the treated, as well as on other groups, and they use their methods to assess the educational impact of a school voucher program in Colombia.
Abstract: In the presence of an endogenous binary treatment and a valid binary instrument, causal effects are point identified only for the subpopulation of compliers, given that the treatment is monotone in the instrument. With the exception of the entire population, causal inference for further subpopulations has been widely ignored in econometrics. We invoke treatment monotonicity and/or dominance assumptions to derive sharp bounds on the average treatment effects on the treated, as well as on other groups. Furthermore, we use our methods to assess the educational impact of a school voucher program in Colombia and discuss testable implications of our assumptions.

43 citations


Journal ArticleDOI
TL;DR: In this article, a new panel-based approach to predictability that is both robust and powerful is proposed, where the cross-section variation is exploited also to achieve robustness with respect to the predictor.
Abstract: The difficulty of predicting returns has recently motivated researchers to start looking for tests that are either more powerful or robust to more features of the data. Unfortunately, the way that these tests work typically involves trading robustness for power or vice versa. The current paper takes this as its starting point to develop a new panel-based approach to predictability that is both robust and powerful. Specifically, while the panel route to increased power is not new, the way in which the cross-section variation is exploited also to achieve robustness with respect to the predictor is. The result is two new tests that enable asymptotically standard normal and chi-squared inference across a wide range of empirically relevant scenarios in which the predictor may be stationary, moderately non-stationary, nearly non-stationary, or indeed unit root non-stationary. The type of cross-section dependence that can be permitted in the predictor is also very general, and can be weak or strong, although we do require that the cross-section dependence in the regression errors is of the strong form. What is more, this generality comes at no cost in terms of complicated test construction. The new tests are therefore very user-friendly.

Journal ArticleDOI
TL;DR: This paper examined the causal link that runs from classroom quality to student achievement using data on twin pairs who entered the same school but were allocated to different classrooms in an exogenous way, and found that the test performance of all students improves with teacher experience.
Abstract: This paper examines the causal link that runs from classroom quality to student achievement using data on twin pairs who entered the same school but were allocated to different classrooms in an exogenous way. In particular, we apply twin fixed-effects estimation to assess the effect of teacher quality on student test scores from second through eighth grade of primary education, arguing that a change in teacher quality is probably the most important classroom intervention within a twin context. In a series of estimations using measurable teacher characteristics, we find that (a) the test performance of all students improves with teacher experience; (b) teacher experience also matters for student performance after the initial years in the profession; (c) the teacher experience effect is most prominent in earlier grades; (d) the teacher experience effects are robust to the inclusion of other classroom quality measures, such as peer group composition and class size; and (e) an increase in teacher experience also matters for career stages with less labor market mobility, which suggests positive returns to on-the-job learning of teachers.
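
Twin fixed-effects estimation reduces to within-pair differencing, which a toy simulation makes concrete; the effect size and design below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data: twin pairs split across classrooms with different teacher experience
P = 500                                   # twin pairs
family = rng.normal(size=P)               # shared family/genetic component
exper = rng.uniform(0, 20, size=(P, 2))   # teacher experience per twin
score = 0.05 * exper + family[:, None] + rng.normal(scale=0.5, size=(P, 2))

# Twin fixed effects = within-pair differencing; the family effect drops out
d_score = score[:, 0] - score[:, 1]
d_exper = exper[:, 0] - exper[:, 1]
beta = (d_exper @ d_score) / (d_exper @ d_exper)   # no-intercept OLS on differences
print(f"within-twin teacher experience effect: {beta:.3f} (true 0.05)")
```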

Journal ArticleDOI
TL;DR: In this paper, a convex optimization reformulation of the nonparametric maximum likelihood estimator of Kiefer and Wolfowitz (Annals of Mathematical Statistics 1956; 27: 887-906) is employed to construct non-parametric Bayes rules for compound decisions.
Abstract: Summary Empirical Bayes methods for Gaussian and binomial compound decision problems involving longitudinal data are considered. A recent convex optimization reformulation of the nonparametric maximum likelihood estimator of Kiefer and Wolfowitz (Annals of Mathematical Statistics 1956; 27: 887–906) is employed to construct nonparametric Bayes rules for compound decisions. The methods are illustrated with an application to predict baseball batting averages, and the age profile of batting performance. An important aspect of the empirical application is the general bivariate specification of the distribution of heterogeneous location and scale effects for players that exhibits a weak positive association between location and scale attributes. Prediction of players' batting averages for 2012 based on performance in the prior decade using the proposed methods shows substantially improved performance over more naive methods with more restrictive treatment of unobserved heterogeneity. Comparisons are also made with nonparametric Bayesian methods based on Dirichlet process priors, which can be viewed as a regularized, or smoothed, version of the Kiefer–Wolfowitz method. Copyright © 2016 John Wiley & Sons, Ltd.
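
A fixed-grid EM iteration gives a simple stand-in for the Kiefer–Wolfowitz NPMLE (the paper uses an equivalent convex-optimization formulation instead); the three-atom toy mixture is an assumption:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Compound decision toy: latent means theta_i, noisy observations y_i
n = 2000
theta = rng.choice([-2.0, 0.0, 2.0], size=n, p=[0.2, 0.5, 0.3])
y = theta + rng.normal(size=n)

# NPMLE of the mixing distribution on a fixed grid, fit by EM
grid = np.linspace(-4, 4, 81)
L = norm.pdf(y[:, None] - grid[None, :])   # likelihood of each y at each atom
f = np.full(grid.size, 1.0 / grid.size)    # mixing weights
for _ in range(500):
    post = L * f
    post /= post.sum(axis=1, keepdims=True)
    f = post.mean(axis=0)

# Nonparametric empirical Bayes rule: posterior mean of theta given y
post = L * f
denoised = (post * grid).sum(axis=1) / post.sum(axis=1)
print(f"MSE raw y: {np.mean((y - theta) ** 2):.3f}, "
      f"MSE EB posterior mean: {np.mean((denoised - theta) ** 2):.3f}")
```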

Journal ArticleDOI
TL;DR: This paper presented an early warning system as a set of multi-period forecasts of indicators of tail real and financial risks obtained using a large database of monthly US data for the period 1972:1-2014:12.
Abstract: Summary This paper presents an early warning system as a set of multi-period forecasts of indicators of tail real and financial risks obtained using a large database of monthly US data for the period 1972:1–2014:12. Pseudo-real-time forecasts are generated from: (a) sets of autoregressive and factor-augmented vector autoregressions (VARs), and (b) sets of autoregressive and factor-augmented quantile projections. Our key finding is that forecasts obtained with AR and factor-augmented VAR forecasts significantly underestimate tail risks, while quantile projections deliver fairly accurate forecasts and reliable early warning signals for tail real and financial risks up to a 1-year horizon. Copyright © 2016 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This paper proposes ℓ1-penalized quantile regression estimators for panel data, which explicitly allow for individual heterogeneity associated with covariates, and finds evidence that positive substitution effects dominate negative wealth effects at the middle of the conditional distribution of hours.
Abstract: This paper proposes new ℓ1-penalized quantile regression estimators for panel data, which explicitly allow for individual heterogeneity associated with covariates. We conduct Monte Carlo simulations to assess the small sample performance of the new estimators and provide comparisons of new and existing penalized estimators in terms of quadratic loss. We apply the techniques to two empirical studies. First, the new method is applied to the estimation of labor supply elasticities and we find evidence that positive substitution effects dominate negative wealth effects at the middle of the conditional distribution of hours. The overall effect tends to be larger at the lower tail, which suggests that changes in taxes have different effects across the response distribution. Second, we estimate consumer preferences for nutrients from a demand model using a large scanner dataset of household food purchases. We show that preferences for nutrients vary across the conditional distribution of expenditure and across genders, and emphasize the importance of fully capturing consumer heterogeneity in demand modeling. Both applications highlight the importance of estimating individual heterogeneity when designing economic policy.

Journal ArticleDOI
TL;DR: The application of this framework to the S&P500 shows that rule-based methods are preferable for (in-sample) identification of the market state, but regime-switching models are preferred for (out-of-sample) forecasting.
Abstract: The state of the equity market, often referred to as a bull or a bear market, is of key importance for financial decisions and economic analyses. Its latent nature has led to several methods to identify past and current states of the market and forecast future states. These methods encompass semi-parametric rule-based methods and parametric regime-switching models. We compare these methods by new statistical and economic measures that take into account the latent nature of the market state. The statistical measure is based directly on the predictions, while the economic measure is based on the utility that results when a risk-averse agent uses the predictions in an investment decision. Our application of this framework to the S&P500 shows that rule-based methods are preferable for (in-sample) identification of the market state, but regime-switching models for (out-of-sample) forecasting. In-sample, only the direction of the market matters, but for forecasting both means and volatilities of returns are important. Both the statistical and the economic measures indicate that these differences are significant.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the dynamic properties of systematic default risk conditions for firms in different countries, industries and rating groups, using a high-dimensional nonlinear non-Gaussian state-space model to estimate common components in corporate defaults in a 41 country samples between 1980:Q1 and s2014:Q4, covering both the global financial crisis and euro area sovereign debt crisis.
Abstract: We investigate the dynamic properties of systematic default risk conditions for firms in different countries, industries and rating groups. We use a high-dimensional nonlinear non-Gaussian state-space model to estimate common components in corporate defaults in a 41 country samples between 1980:Q1 and s2014:Q4, covering both the global financial crisis and euro area sovereign debt crisis. We find that macro and default-specific world factors are a primary source of default clustering across countries. Defaults cluster more than what shared exposures to macro factors imply, indicating that other factors also play a significant role. For all firms, deviations of systematic default risk from macro fundamentals are correlated with net tightening bank lending standards, suggesting that bank credit supply and systematic default risk are inversely related.

Journal ArticleDOI
TL;DR: This article developed a non-Gaussian modeling framework to infer measures of conditional and joint default risk for numerous financial sector firms based on a dynamic Generalized Hyperbolic Skewed-t block-equicorrelation copula with time-varying volatility and dependence parameters.
Abstract: We develop a novel high-dimensional non-Gaussian modeling framework to infer measures of conditional and joint default risk for numerous financial sector firms. The model is based on a dynamic Generalized Hyperbolic Skewed-t block-equicorrelation copula with time-varying volatility and dependence parameters that naturally accommodates asymmetries, heavy tails, as well as non-linear and time-varying default dependence. We apply a conditional law of large numbers in this setting to define joint and conditional risk measures that can be evaluated quickly and reliably. We apply the modeling framework to assess the joint risk from multiple defaults in the euro area during the 2008–2012 financial and sovereign debt crisis. We document unprecedented tail risks during 2011–2012, as well as their steep decline following subsequent policy actions. JEL Classification: G21, C32

Journal ArticleDOI
TL;DR: This article investigates measurement error biases in estimated poverty transition matrices and finds that time-varying measurement error in expenditure data magnifies economic mobility: roughly 45% of households initially in poverty appear to exit poverty, which drops to between 26 and 31% when measurement error is removed.
Abstract: Summary This paper investigates measurement error biases in estimated poverty transition matrices. We compare transition matrices based on survey expenditure data to transition matrices based on measurement-error-free simulated expenditure. The simulation model uses estimates that correct for measurement error in expenditure. We find that time-varying measurement error in expenditure data magnifies economic mobility. Roughly 45% of households initially in poverty at time t − 1 are found to be out of poverty at time t using data from the Korean Labor and Income Panel Study. When measurement error is removed, this drops to between 26 and 31% of households initially in poverty. Copyright © 2016 John Wiley & Sons, Ltd.
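
Estimating a poverty transition matrix, and the way classical measurement error inflates apparent mobility, can be sketched directly; the persistence and noise parameters below are illustrative, not the Korean panel estimates:

```python
import numpy as np

def transition_matrix(expend_t0, expend_t1, poverty_line):
    """2x2 poverty transition matrix: rows = status at t-1, cols = status at t,
    each row normalized to transition probabilities (poor first)."""
    poor0 = expend_t0 < poverty_line
    poor1 = expend_t1 < poverty_line
    M = np.array([[np.sum(poor0 & poor1), np.sum(poor0 & ~poor1)],
                  [np.sum(~poor0 & poor1), np.sum(~poor0 & ~poor1)]], dtype=float)
    return M / M.sum(axis=1, keepdims=True)

# Toy illustration: persistent log expenditure plus classical measurement error
rng = np.random.default_rng(8)
n = 10_000
true0 = rng.normal(size=n)
true1 = 0.9 * true0 + np.sqrt(1 - 0.81) * rng.normal(size=n)
noise = 0.5
obs0 = true0 + noise * rng.normal(size=n)
obs1 = true1 + noise * rng.normal(size=n)

line = -1.0   # poverty line in the same (log) units
print("error-free:\n", transition_matrix(true0, true1, line).round(2))
print("with measurement error:\n", transition_matrix(obs0, obs1, line).round(2))
```

The off-diagonal mass is visibly larger in the noisy panel, which is the mobility-magnifying bias the abstract describes.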

Journal ArticleDOI
TL;DR: In this article, a joint Hawkes process in conjunction with a bivariate jump diffusion is used to model dynamic jumps in the price and volatility of an asset using a state-space representation, with Bayesian inference conducted using a Markov chain Monte Carlo algorithm.
Abstract: Summary Dynamic jumps in the price and volatility of an asset are modelled using a joint Hawkes process in conjunction with a bivariate jump diffusion. A state-space representation is used to link observed returns, plus nonparametric measures of integrated volatility and price jumps, to the specified model components, with Bayesian inference conducted using a Markov chain Monte Carlo algorithm. An evaluation of marginal likelihoods for the proposed model relative to a large number of alternative models, including some that have featured in the literature, is provided. An extensive empirical investigation is undertaken using data on the S&P 500 market index over the 1996–2014 period, with substantial support for dynamic jump intensities—including in terms of predictive accuracy—documented. Copyright © 2016 John Wiley & Sons, Ltd.
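
The self-exciting jump dynamics can be illustrated by simulating a univariate Hawkes intensity via thinning; the parameter values are arbitrary, and the paper's bivariate price–volatility specification and MCMC estimation are not attempted here:

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate_hawkes(mu, alpha, beta, T):
    """Simulate a self-exciting (Hawkes) jump process on [0, T] by thinning.
    Intensity: lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))."""
    times, t = [], 0.0
    while True:
        hist = np.asarray(times)
        # Intensity decays between events, so its current value bounds the future
        lam_bar = mu + (alpha * np.exp(-beta * (t - hist)).sum() if times else 0.0)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.asarray(times)
        lam_t = mu + (alpha * np.exp(-beta * (t - hist)).sum() if times else 0.0)
        if rng.uniform() < lam_t / lam_bar:    # accept with prob lambda(t)/bound
            times.append(t)

jumps = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, T=500.0)
gaps = np.diff(jumps)
print(f"{jumps.size} jumps; squared CV of gaps "
      f"{gaps.var() / gaps.mean() ** 2:.2f} (1 would be Poisson)")
```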

Journal ArticleDOI
TL;DR: In this paper, the authors construct a new measure of mortgage credit availability that describes the maximum amount obtainable by a borrower of given characteristics using mortgage originations data from 2001 to 2014 and show that it reflects a binding borrowing constraint.
Abstract: We construct a new measure of mortgage credit availability that describes the maximum amount obtainable by a borrower of given characteristics. We estimate this "loan frontier" using mortgage originations data from 2001 to 2014 and show that it reflects a binding borrowing constraint. Our estimates reveal that the expansion of mortgage credit during the housing boom was substantial for all borrowers, not only for low-score or low-income borrowers. The contraction was most pronounced for low-score borrowers. Using variation in the frontier across metropolitan areas over time, we show that borrowing constraints played an important role in the recent housing cycle.

Journal ArticleDOI
Tatjana Dahlhaus1
TL;DR: In this paper, the author investigates the effects of conventional monetary policy shocks in the USA during periods of high financial stress, focusing on the effects on macroeconomic variables such as output, consumption, and investment.
Abstract: Summary This paper studies the effects of a conventional monetary policy shock in the USA during times of high financial stress. The analysis is carried out by introducing a smooth transition factor model where the transition between states (‘normal’ and high financial stress) depends on a financial conditions index. Employing a quarterly dataset over the period 1970:Q1 to 2008:Q4 containing 108 US macroeconomic and financial time series, I find that a monetary policy shock during periods of high financial stress has stronger and more persistent effects on macroeconomic variables such as output, consumption and investment than it has during ‘normal’ times. Differences in effects among the regimes seem to originate from nonlinearities in both components of the credit channel, i.e. the balance sheet channel and the bank-lending channel. Copyright © 2016 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This paper proposes two new estimators for dynamic panel data models when data are irregularly spaced and compares their finite-sample performance to the naive application of existing estimators.
Abstract: Summary With the increased availability of longitudinal data, dynamic panel data models have become commonplace. Moreover, the properties of various estimators of such models are well known. However, we show that these estimators break down when the data are irregularly spaced along the time dimension. Unfortunately, this is an increasingly frequent occurrence as many longitudinal surveys are collected at non-uniform intervals and no solution is currently available when time-varying covariates are included in the model. In this paper, we propose two new estimators for dynamic panel data models when data are irregularly spaced and compare their finite-sample performance to the naive application of existing estimators. We illustrate the practical importance of this issue in an application concerning early childhood development. Copyright © 2016 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This paper develops solutions for linearized models with forward-looking expectations and structural changes under a variety of assumptions regarding agents' beliefs about those structural changes, and shows how each solution's associated likelihood function can be constructed using a 'backward–forward' algorithm.
Abstract: Summary In this paper, we develop solutions for linearized models with forward-looking expectations and structural changes under a variety of assumptions regarding agents' beliefs about those structural changes. For each solution, we show how its associated likelihood function can be constructed by using a ‘backward–forward’ algorithm. We illustrate the techniques with two examples. The first considers an inflationary program in which beliefs about the inflation target evolve differently from the inflation target itself, and the second applies the techniques to estimate a new Keynesian model through the Volcker disinflation. We compare our methodology with the alternative in which structural change is captured by switching between regimes via a Markov switching process. We show that our method can produce accurate results much faster than the Markov switching method as well as being easily adapted to handle beliefs departing from reality. Copyright © 2016 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a forecasting method that minimizes the effects of weak predictors and estimation errors on the accuracy of equity premium forecasts, based on an averaging scheme applied to quantiles conditional on predictors selected by LASSO.
Abstract: Summary This paper develops a novel forecasting method that minimizes the effects of weak predictors and estimation errors on the accuracy of equity premium forecasts. The proposed method is based on an averaging scheme applied to quantiles conditional on predictors selected by LASSO. The resulting forecasts outperform the historical average, and other existing models, by statistically and economically meaningful margins. Copyright © 2016 John Wiley & Sons, Ltd.
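
A sketch of the two-step idea, LASSO selection followed by averaging conditional quantile forecasts; the toy predictors, quantile grid, and tuning below are assumptions rather than the paper's exact scheme:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(10)

# Toy equity-premium setting: many candidate predictors, few weakly useful
T, K = 240, 25
X = rng.normal(size=(T, K))
y = 0.2 * X[:, 0] + 0.1 * X[:, 1] + rng.standard_t(df=5, size=T)

# Step 1: LASSO screens the predictor set (predictive timing: X_t -> y_{t+1})
sel = LassoCV(cv=5).fit(X[:-1], y[1:])
keep = np.flatnonzero(sel.coef_)
print("selected predictors:", keep)

# Step 2: fit conditional quantiles on the survivors and average the forecasts
Xs = sm.add_constant(X[:-1][:, keep])
x_new = sm.add_constant(X[-1:][:, keep], has_constant="add")
q_preds = [sm.QuantReg(y[1:], Xs).fit(q=q).predict(x_new)[0]
           for q in (0.3, 0.4, 0.5, 0.6, 0.7)]
print(f"averaged quantile forecast: {np.mean(q_preds):+.3f}")
```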

Journal ArticleDOI
TL;DR: The authors assess the perception of professional forecasters regarding the effectiveness of unconventional monetary policy measures undertaken by the U.S. Federal Reserve after the collapse of Lehman Brothers and find that bond yields are expected to drop significantly for at least one year after the announcement and the implementation of accommodative policies.
Abstract: We assess the perception of professional forecasters regarding the effectiveness of unconventional monetary policy measures undertaken by the U.S. Federal Reserve after the collapse of Lehman Brothers. Using individual survey data, we analyse the changes in forecasting of bond yields around the announcement and implementation dates of non-standard monetary policies. The results indicate that bond yields are expected to drop significantly for at least one year after the announcement and the implementation of accommodative policies.

Journal ArticleDOI
TL;DR: This paper estimates the factor model within a Bayesian framework, specifying a sparse prior distribution for the factor loadings, and provides alternative methods to identify relevant and irrelevant variables.
Abstract: Summary This paper considers factor estimation from heterogeneous data, where some of the variables—the relevant ones—are informative for estimating the factors, and others—the irrelevant ones—are not. We estimate the factor model within a Bayesian framework, specifying a sparse prior distribution for the factor loadings. Based on identified posterior factor loading estimates, we provide alternative methods to identify relevant and irrelevant variables. Simulations show that both types of variables are identified quite accurately. Empirical estimates for a large multi-country GDP dataset and a disaggregated inflation dataset for the USA show that a considerable share of variables is irrelevant for factor estimation.

Journal ArticleDOI
TL;DR: In this article, the authors present a method to estimate jointly the parameters of a standard commodity storage model and the parameters characterizing the trend in commodity prices, and show that storage models with trend are always preferred to models without trend.
Abstract: We present a method to estimate jointly the parameters of a standard commodity storage model and the parameters characterizing the trend in commodity prices. This procedure allows the influence of a possible trend to be removed without restricting the model specification, and allows model and trend selection based on statistical criteria. The trend is modeled deterministically using linear or cubic spline functions of time. The results show that storage models with trend are always preferred to models without trend. They yield more plausible estimates of the structural parameters, with storage costs and demand elasticities that are more consistent with the literature. They imply occasional stockouts, whereas without trend the estimated models predict no stockouts over the sample period for most commodities. Moreover, accounting for a trend in the estimation implies price moments closer to those observed in commodity prices. Our results support the empirical relevance of the speculative storage model, and show that storage model estimations should not neglect the possibility of long-run price trends.
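
The spline-trend ingredient can be sketched on its own (the paper estimates the trend jointly with the storage model's structural parameters); the simulated price series and knot placement below are illustrative:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(11)

# Toy annual commodity price series: slow trend plus storage-like spikes
T = 100
t = np.arange(T, dtype=float)
trend = 0.5 - 0.004 * t + 0.1 * np.sin(t / 25.0)
log_p = trend + 0.08 * rng.normal(size=T) + 0.3 * (rng.uniform(size=T) < 0.05)

# Deterministic cubic-spline trend with a few interior knots, fit by least squares
knots = np.quantile(t, [0.25, 0.5, 0.75])
spline = LSQUnivariateSpline(t, log_p, knots, k=3)
detrended = log_p - spline(t)
print(f"trend removes {1 - detrended.var() / log_p.var():.0%} of price variance")
```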

ReportDOI
TL;DR: In this paper, the authors studied the effect of funding costs on the conditional probability of issuing a corporate bond and found that for non-financial firms yields are negatively related to bond issuance but that the effect is larger in the pre-crisis period.
Abstract: Summary What is the effect of funding costs on the conditional probability of issuing a corporate bond? We study this question in a novel dataset covering 5610 issuances by US firms over the period from 1990 to 2014. Identification of this effect is complicated because of unobserved, common shocks such as the global financial crisis. To account for these shocks, we extend the common correlated effects estimator to settings where outcomes are discrete. Both the asymptotic properties and the small-sample behavior of this estimator are documented. We find that for non-financial firms yields are negatively related to bond issuance but that the effect is larger in the pre-crisis period.
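
The common correlated effects idea, augmenting the regression with cross-section averages to absorb common shocks, can be sketched in a linear-probability form; the paper develops the estimator properly for discrete outcomes, so this shows only the augmentation mechanics on made-up data:

```python
import numpy as np

rng = np.random.default_rng(12)

# Panel with a common shock: N firms, T periods
N, T = 100, 25
f = np.cumsum(rng.normal(size=T))           # unobserved common factor (e.g. crisis)
lam = rng.uniform(0.5, 1.5, size=N)         # heterogeneous factor loadings
x = rng.normal(size=(N, T)) + 0.5 * f       # yields also load on the factor
ystar = -0.5 * x + lam[:, None] * f + rng.normal(size=(N, T))
y = (ystar > 0).astype(float)               # 1 = firm issues a bond

# CCE idea: cross-section averages of y and x proxy for the common shocks
y_bar, x_bar = y.mean(axis=0), x.mean(axis=0)
Z = np.column_stack([np.ones(N * T),
                     x.ravel(),
                     np.tile(y_bar, N),
                     np.tile(x_bar, N)])
beta = np.linalg.lstsq(Z, y.ravel(), rcond=None)[0]
print(f"slope on yields with CCE augmentation: {beta[1]:+.3f}")
```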

Journal ArticleDOI
TL;DR: In this paper, the authors show that the bias in estimated impulse responses in a factor-augmented vector autoregressive (FAVAR) model is positively related to the strength of the error correction mechanism and the cross-section dimension of the panel.
Abstract: Summary Starting from the dynamic factor model for nonstationary data we derive the factor-augmented error correction model (FECM) and its moving-average representation. The latter is used for the identification of structural shocks and their propagation mechanisms. We show how to implement classical identification schemes based on long-run restrictions in the case of large panels. The importance of the error correction mechanism for impulse response analysis is analyzed by means of both empirical examples and simulation experiments. Our results show that the bias in estimated impulse responses in a factor-augmented vector autoregressive (FAVAR) model is positively related to the strength of the error correction mechanism and the cross-section dimension of the panel. We observe empirically in a large panel of US data that these features have a substantial effect on the responses of several variables to the identified permanent real (productivity) and monetary policy shocks.