
Showing papers on "Heteroscedasticity published in 2007"


Journal ArticleDOI
TL;DR: In this paper, a Bayesian method is proposed to account for measurement errors in linear regression of astronomical data. The method is based on deriving a likelihood function for the measured data and focuses on the case when the intrinsic distribution of the independent variables can be approximated using a mixture of Gaussian functions.
Abstract: I describe a Bayesian method to account for measurement errors in linear regression of astronomical data. The method allows for heteroscedastic and possibly correlated measurement errors and intrinsic scatter in the regression relationship. The method is based on deriving a likelihood function for the measured data, and I focus on the case when the intrinsic distribution of the independent variables can be approximated using a mixture of Gaussian functions. I generalize the method to incorporate multiple independent variables, nondetections, and selection effects (e.g., Malmquist bias). A Gibbs sampler is described for simulating random draws from the probability distribution of the parameters, given the observed data. I use simulation to compare the method with other common estimators. The simulations illustrate that the Gaussian mixture model outperforms other common estimators and can effectively give constraints on the regression parameters, even when the measurement errors dominate the observed scatter, source detection fraction is low, or the intrinsic distribution of the independent variables is not a mixture of Gaussian functions. I conclude by using this method to fit the X-ray spectral slope as a function of Eddington ratio using a sample of 39 z < 0.8 radio-quiet quasars. I confirm the correlation seen by other authors between the radio-quiet quasar X-ray spectral slope and the Eddington ratio, where the X-ray spectral slope softens as the Eddington ratio increases. IDL routines are made available for performing the regression.
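The paper's full Gibbs-sampled Gaussian-mixture treatment is beyond a short example, but the core problem it addresses, heteroscedastic measurement error in the independent variable attenuating a regression slope, can be sketched with a simple moment-based correction. This is not the paper's method; all data and noise levels below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
xi = rng.normal(0, 1, n)                      # true independent variable
eta = 1.0 + 2.0 * xi + rng.normal(0, 0.3, n)  # true dependent variable

# Heteroscedastic measurement errors of known, differing sizes
sx = rng.uniform(0.2, 0.8, n)
x = xi + rng.normal(0, sx)
y = eta + rng.normal(0, rng.uniform(0.1, 0.4, n))

# Naive OLS is attenuated toward zero by the error in x
slope_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Moment correction: subtract the mean error variance from Var(x)
slope_corr = np.cov(x, y)[0, 1] / (np.var(x, ddof=1) - np.mean(sx ** 2))
print(slope_ols, slope_corr)
```

The corrected slope recovers the true value of 2, while the naive slope is noticeably biased downward.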

1,264 citations


Journal ArticleDOI
TL;DR: It is argued that investigators should routinely use one of these heteroskedasticity-consistent standard error estimators for OLS regression, and easy-to-use SPSS and SAS macros are provided to implement this recommendation.
Abstract: Homoskedasticity is an important assumption in ordinary least squares (OLS) regression. Although the estimator of the regression parameters in OLS regression is unbiased when the homoskedasticity assumption is violated, the estimator of the covariance matrix of the parameter estimates can be biased and inconsistent under heteroskedasticity, which can produce significance tests and confidence intervals that can be liberal or conservative. After a brief description of heteroskedasticity and its effects on inference in OLS regression, we discuss a family of heteroskedasticity-consistent standard error estimators for OLS regression and argue investigators should routinely use one of these estimators when conducting hypothesis tests using OLS regression. To facilitate the adoption of this recommendation, we provide easy-to-use SPSS and SAS macros to implement the procedures discussed here.
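The macros referenced above are for SPSS and SAS, but the heteroskedasticity-consistent estimators themselves are straightforward to reproduce. A minimal numpy sketch of the HC3 variant on simulated data (not the authors' code; data-generating choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.normal(0, 0.2 * x)  # error spread grows with x

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Leverage values h_i (diagonal of the hat matrix)
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)

# HC3: inflate each squared residual by (1 - h_i)^-2
meat = X.T @ (X * (resid ** 2 / (1.0 - h) ** 2)[:, None])
cov_hc3 = XtX_inv @ meat @ XtX_inv
se_hc3 = np.sqrt(np.diag(cov_hc3))

# Classical (homoskedastic) standard errors for comparison
s2 = resid @ resid / (n - 2)
se_ols = np.sqrt(np.diag(s2 * XtX_inv))
print(se_ols, se_hc3)
```

With variance increasing in x, the classical standard errors are misleading while the HC3 errors remain consistent.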

954 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate more accurate inference using cluster bootstrap-t procedures that provide asymptotic refinement for regression parameter estimates, and evaluate these procedures using Monte Carlo simulations, including the much-cited differences-in-differences example of Bertrand, Mullainathan and Duflo.
Abstract: Microeconometrics researchers have increasingly realized the essential need to account for any within-group dependence in estimating standard errors of regression parameter estimates. The typical preferred solution is to calculate cluster-robust or sandwich standard errors that permit quite general heteroskedasticity and within-cluster error correlation, but presume that the number of clusters is large. In applications with few (5-30) clusters, standard asymptotic tests can over-reject considerably. We investigate more accurate inference using cluster bootstrap-t procedures that provide asymptotic refinement. These procedures are evaluated using Monte Carlos, including the much-cited differences-in-differences example of Bertrand, Mullainathan and Duflo (2004). In situations where standard methods lead to rejection rates in excess of ten percent for tests of nominal size 0.05, our methods can reduce this to five percent. In principle a pairs cluster bootstrap should work well, but in practice a wild cluster bootstrap performs better.
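A stripped-down sketch of the wild cluster bootstrap idea with the null imposed, resampling restricted residuals with one Rademacher weight per cluster. This is a percentile version, not the full bootstrap-t refinement the paper studies, and the data-generating process is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
G, n_g = 10, 30                         # few clusters, the paper's problem setting
cluster = np.repeat(np.arange(G), n_g)
x = rng.normal(size=G * n_g)
# Errors share a cluster-level component, so observations are dependent within clusters
u = rng.normal(size=G)[cluster] + rng.normal(size=G * n_g)
y = 2.0 + u                             # the true slope on x is zero

X = np.column_stack([np.ones_like(x), x])
slope = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Wild cluster bootstrap under the null (slope = 0): fit the restricted
# model, then flip restricted residuals cluster-by-cluster
b0 = y.mean()
resid0 = y - b0
boot = np.empty(499)
for b in range(499):
    w = rng.choice([-1.0, 1.0], size=G)[cluster]
    y_star = b0 + resid0 * w
    boot[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]

p_value = np.mean(np.abs(boot) >= np.abs(slope))
print(slope, p_value)
```

Because the Rademacher draw is made per cluster rather than per observation, the bootstrap samples preserve the within-cluster error correlation.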

915 citations


Posted Content
TL;DR: In this article, the authors bring to the reader's attention a consultation on this topic prepared from the book by Marno Verbeek, “A Guide to Modern Econometrics”, appearing soon in the Publishing House “Nauchnaya Kniga”.
Abstract: Models of Autoregressive Conditional Heteroscedasticity (ARCH) and their generalizations are widely used in applied econometric research, especially for the analysis of financial markets. We bring to our reader’s attention a consultation on this topic prepared from the book by Marno Verbeek, “A Guide to Modern Econometrics”, appearing soon in the Publishing House “Nauchnaya Kniga”.

403 citations


Journal ArticleDOI
TL;DR: In this article, a nonparametric heteroscedasticity and autocorrelation consistent (HAC) estimator of the variance-covariance (VC) matrix for a vector of sample moments within a spatial context is proposed.

387 citations


Proceedings ArticleDOI
20 Jun 2007
TL;DR: This paper follows Goldberg et al.'s approach and models the noise variance using a second GP in addition to the GP governing the noise-free output value, but replaces the Markov chain Monte Carlo approximation of the posterior noise variance with a most likely noise approach.
Abstract: This paper presents a novel Gaussian process (GP) approach to regression with input-dependent noise rates. We follow Goldberg et al.'s approach and model the noise variance using a second GP in addition to the GP governing the noise-free output value. In contrast to Goldberg et al., however, we do not use a Markov chain Monte Carlo method to approximate the posterior noise variance but a most likely noise approach. The resulting model is easy to implement and can directly be used in combination with various existing extensions of the standard GPs such as sparse approximations. Extensive experiments on both synthetic and real-world data, including a challenging perception problem in robotics, show the effectiveness of most likely heteroscedastic GP regression.
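The two-GP structure can be sketched in a few lines of numpy: a mean GP, plus a second GP fit to the log of the squared residuals standing in for the noise-variance process, alternated a few times. This is a rough caricature with fixed kernel hyperparameters and plain point estimates, not the paper's most likely noise derivation:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_mean(x_tr, y_tr, x_te, noise_var):
    # GP posterior mean with a per-point noise variance on the diagonal
    K = rbf(x_tr, x_tr) + np.diag(noise_var)
    return rbf(x_te, x_tr) @ np.linalg.solve(K, y_tr)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 5, 80))
sigma_true = 0.05 + 0.2 * x                 # input-dependent noise level
y = np.sin(x) + rng.normal(0, sigma_true)

# Alternate: fit the mean GP, then fit a second GP to log squared
# residuals to update the input-dependent noise variances
noise_var = np.full_like(x, 0.1)
for _ in range(5):
    mu = gp_mean(x, y, x, noise_var)
    log_r2 = np.log((y - mu) ** 2 + 1e-6)
    noise_var = np.exp(gp_mean(x, log_r2, x, np.full_like(x, 1.0)))

print(noise_var[:3], noise_var[-3:])
```

After a few alternations the estimated noise variance tracks the growth of the true noise level across the input range.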

354 citations


Journal ArticleDOI
TL;DR: In this paper, a local linear approach is developed to estimate the time trend and coefficient functions, and the asymptotic properties of the proposed estimators, coupled with their comparisons with other methods, are established under the α-mixing conditions and without specifying the error distribution.

332 citations


Journal ArticleDOI
TL;DR: In this article, periodic extensions of dynamic long-memory regression models with autoregressive conditional heteroscedastic errors are considered for the analysis of daily electricity spot prices, and the parameters of the model with mean and variance specifications are estimated simultaneously by the method of approximate maximum likelihood.
Abstract: Novel periodic extensions of dynamic long-memory regression models with autoregressive conditional heteroscedastic errors are considered for the analysis of daily electricity spot prices. The parameters of the model with mean and variance specifications are estimated simultaneously by the method of approximate maximum likelihood. The methods are implemented for time series of 1,200–4,400 daily price observations in four European power markets. Apart from persistence, heteroscedasticity, and extreme observations in prices, a novel empirical finding is the importance of day-of-the-week periodicity in the autocovariance function of electricity spot prices. In particular, the very persistent daily log prices from the Nord Pool power exchange of Norway are effectively modeled by our framework, which is also extended with explanatory variables to capture supply-and-demand effects. The daily log prices of the other three electricity markets—EEX in Germany, Powernext in France, and APX in The Netherlands—are less...

286 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a procedure to overcome the problem of non-identifiability of distributed parameters by introducing aggregate parameters and using Bayesian inference, and they demonstrated the good performance of this approach to uncertainty analysis, particularly with respect to the fulfilment of statistical assumptions of the error model.

221 citations


Book ChapterDOI
01 Jan 2007
TL;DR: In this article, a long-memory stochastic volatility model is proposed; its dynamic properties are derived and shown to be consistent with empirical findings reported in the literature on stock returns.
Abstract: Publisher Summary It is well established that while financial variables such as stock returns are serially uncorrelated over time, their squares are not. The most common way of modeling this serial correlation in volatility is by means of the generalized autoregressive conditional heteroscedasticity (GARCH) class. This chapter provides a long memory stochastic volatility model. Its dynamic properties are derived and shown to be consistent with empirical findings reported in the literature on stock returns. The model is parsimonious and appears to be a viable alternative to the asymmetric power ARCH (A-PARCH) class.

221 citations


Journal ArticleDOI
TL;DR: Inference for the fixed effects under the assumption of independent normally distributed errors with constant variance is shown to be robust when the errors are either non-Gaussian or heteroscedastic, except when the error variance depends on a covariate included in the model with interaction with time.

Journal ArticleDOI
TL;DR: In this paper, the authors enhance the mixed logit model to capture additional alternative-specific unobserved variation not subject to the constant variance condition, which is independent of sources revealed through random parameters.
Abstract: Developments in simulation methods, and the computational power that is now available, have enabled open-form discrete choice models such as mixed logit to be estimated with relative ease. The random parameter (RP) form has been used to identify preference heterogeneity, which can be mapped to specific individuals through re-parameterisation of the mean and/or variance of each RP’s distribution. However this formulation depends on the selection of random parameters to reveal such heterogeneity, with any residual heterogeneity forced into the constant variance condition of the extreme value type 1 distribution of the classical multinomial logit model. In this paper we enhance the mixed logit model to capture additional alternative-specific unobserved variation not subject to the constant variance condition, which is independent of sources revealed through random parameters. An empirical example is presented to illustrate the additional information obtained from this model.

Book ChapterDOI
01 Jan 2007
TL;DR: In this paper, it is assumed that the residuals are not only uncorrelated but also homoscedastic, i.e. that the unexplained fluctuations have no dependencies in the second moments.
Abstract: All models discussed so far use the conditional expectation to describe the mean development of one or more time series. The optimal forecast, in the sense that the variance of the forecast errors will be minimised, is given by the conditional mean of the underlying model. Here, it is assumed that the residuals are not only uncorrelated but also homoscedastic, i.e. that the unexplained fluctuations have no dependencies in the second moments. However, Benoit Mandelbrot (1963) already showed that financial market data have more outliers than would be compatible with the (usually assumed) normal distribution and that there are ‘volatility clusters’: small (large) shocks are again followed by small (large) shocks. This may lead to ‘leptokurtic distributions’, which – as compared to a normal distribution – exhibit more mass at the centre and at the tails of the distribution. This results in ‘excess kurtosis’, i.e. the values of the kurtosis are above three.
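Both stylized facts, volatility clustering and excess kurtosis, can be reproduced by simulating a GARCH(1,1) process and comparing its sample kurtosis with that of a Gaussian series (parameter values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
# GARCH(1,1): today's variance depends on yesterday's squared shock
# and yesterday's variance, producing volatility clusters
omega, alpha, beta = 0.1, 0.2, 0.75
h = np.empty(n)
r = np.empty(n)
h[0] = omega / (1 - alpha - beta)       # unconditional variance
r[0] = np.sqrt(h[0]) * rng.normal()
for t in range(1, n):
    h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.normal()

def kurtosis(z):
    z = (z - z.mean()) / z.std()
    return np.mean(z ** 4)

kurt_gauss = kurtosis(rng.normal(size=n))   # close to 3 for a Gaussian
kurt_r = kurtosis(r)                        # well above 3: leptokurtic
print(kurt_gauss, kurt_r)
```

Even though each conditional shock is Gaussian, the varying conditional variance fattens the unconditional tails, exactly the excess kurtosis described above.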

Journal ArticleDOI
TL;DR: In this article, the asymptotic properties of the least-squares estimator of the autoregressive sieve parameters were derived for stationary linear processes with conditional heteroskedasticity.
Abstract: The main contribution of this paper is a proof of the asymptotic validity of the application of the bootstrap to AR(∞) processes with unmodelled conditional heteroskedasticity. We first derive the asymptotic properties of the least-squares estimator of the autoregressive sieve parameters when the data are generated by a stationary linear process with martingale difference errors that are possibly subject to conditional heteroskedasticity of unknown form. These results are then used in establishing that a suitably constructed bootstrap estimator will have the same limit distribution as the least-squares estimator. Our results provide theoretical justification for the use of either the conventional asymptotic approximation based on robust standard errors or the bootstrap approximation of the distribution of autoregressive parameters. A simulation study suggests that the bootstrap approach tends to be more accurate in small samples.

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed how outliers affect the identification of conditional heteroscedasticity and the estimation of generalized autoregressive conditionally heteroScedastic (GARCH) models and derived the asymptotic biases of the sample autocorrelations of squared observations generated by stationary processes.
Abstract: This paper analyses how outliers affect the identification of conditional heteroscedasticity and the estimation of generalized autoregressive conditionally heteroscedastic (GARCH) models. First, we derive the asymptotic biases of the sample autocorrelations of squared observations generated by stationary processes and show that the properties of some conditional homoscedasticity tests can be distorted. Second, we obtain the asymptotic and finite sample biases of the ordinary least squares (OLS) estimator of ARCH(p) models. The finite sample results are extended to generalized least squares (GLS), maximum likelihood (ML) and quasi-maximum likelihood (QML) estimators of ARCH(p) and GARCH(1,1) models. Finally, we show that the estimated asymptotic standard deviations are biased estimates of the sample standard deviations.
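A tiny illustration in the spirit of this result: in an i.i.d. series with no true ARCH effects, a pair of consecutive additive outliers manufactures large spurious autocorrelation in the squared observations (the series and outlier size are invented for the demonstration):

```python
import numpy as np

def acf_sq(x, lag=1):
    """Lag-`lag` sample autocorrelation of the squared series."""
    s = x ** 2
    s = s - s.mean()
    return np.sum(s[:-lag] * s[lag:]) / np.sum(s * s)

rng = np.random.default_rng(7)
x = rng.normal(size=1000)          # i.i.d., so no true ARCH effects
clean = acf_sq(x)

x_out = x.copy()
x_out[500] = x_out[501] = 15.0     # two consecutive additive outliers
contaminated = acf_sq(x_out)
print(clean, contaminated)
```

The clean series has a lag-1 autocorrelation of squares near zero, while the contaminated one looks strongly "ARCH-like", which is how outliers can distort conditional homoscedasticity tests.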

Journal ArticleDOI
TL;DR: The authors show that unexplained cross-sectional variation in true abnormal returns plausibly produces non-proportional heteroskedasticity in cross-sectional regressions, biasing coefficient standard errors for both ordinary and weighted least squares.
Abstract: We demonstrate analytically that cross-sectional variation in the effects of events, i.e., in true abnormal returns, necessarily produces event-induced variance increases, biasing popular tests for mean abnormal returns in short-horizon event studies. We show that unexplained cross-sectional variation in true abnormal returns plausibly produces non-proportional heteroskedasticity in cross-sectional regressions, biasing coefficient standard errors for both ordinary and weighted least squares. Simulations highlight the resulting biases, the necessity of using tests robust to cross-sectional variation, and the power of robust tests, including regression-based tests for nonzero mean abnormal returns, which may increase power by conditioning on relevant explanatory variables.

Journal ArticleDOI
TL;DR: In this paper, an objective function is derived based on robust statistics of the time series under consideration, which takes into account stylized facts about the unconditional distribution of exchange rate returns and properties of the conditional distribution, in particular autoregressive conditional heteroscedasticity and long memory.
Abstract: The assessment of models of financial market behaviour requires evaluation tools. When complexity hinders a direct estimation approach, e.g., for agent-based microsimulation models, simulation-based estimators might provide an alternative. In order to apply such techniques, an objective function is required, which should be based on robust statistics of the time series under consideration. Based on the identification of robust statistics of foreign exchange rate time series in previous research, an objective function is derived. This function takes into account stylized facts about the unconditional distribution of exchange rate returns and properties of the conditional distribution, in particular, autoregressive conditional heteroscedasticity and long memory. A bootstrap procedure is used to obtain an estimate of the variance-covariance matrix of the different moments included in the objective function, which is used as a base for the weighting matrix. Finally, the properties of the objective function are analyzed for two different agent-based models of the foreign exchange market, a simple GARCH model and a stochastic volatility model, using the DM/US-$ exchange rate as a benchmark. It is also discussed how the results might be used for inference purposes.

Journal ArticleDOI
TL;DR: This article showed that the impact of unions on wages is overestimated by 20-30 percent when estimates from log-wage regressions are used for inference, under weaker assumptions about the dependence between the error term and the regressors.

Journal ArticleDOI
TL;DR: A heteroscedastic regression model is presented, and it is shown that the variance of a random variable is positively correlated with both dispersion among experts' forecasts and scale, and that the effects of dispersion and scale on the variance of forecast error are consistent over time.
Abstract: Measuring demand uncertainty is a key activity in supply chain planning, but it is difficult when demand history is unavailable, such as for new products. One method that can be applied in such cases uses dispersion among forecasting experts as a measure of demand uncertainty. This paper provides a test for this method and presents a heteroscedastic regression model for estimating the variance of demand using dispersion among experts' forecasts and scale. We test this methodology using three data sets: demand data at item level, sales data at firm level for retailers, and sales data at firm level for manufacturers. We show that the variance of a random variable (demand and sales for our data sets) is positively correlated with both dispersion among experts' forecasts and scale: The variance increases sublinearly with dispersion and more than linearly with scale. Further, we use longitudinal data sets with sales forecasts made three to nine months before the earnings report date for retailers and manufacturers to show that the effects of dispersion and scale on variance of forecast error are consistent over time.
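The general pattern, modeling the error variance as a function of observable covariates and then reweighting, can be sketched with a two-step feasible GLS on simulated data. The variable names (dispersion, scale) echo the paper but the data-generating process is entirely invented; this is not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
dispersion = rng.uniform(0.1, 2.0, n)    # stand-in for expert-forecast dispersion
scale = rng.uniform(1.0, 10.0, n)        # stand-in for product scale
# Hypothetical data: error variance grows with both dispersion and scale
sigma = 0.5 * np.sqrt(dispersion) * scale
demand = 10 * scale + rng.normal(0, sigma)

# Step 1: OLS, then model log squared residuals on the variance drivers
X = np.column_stack([np.ones(n), scale])
beta_ols = np.linalg.lstsq(X, demand, rcond=None)[0]
r2 = (demand - X @ beta_ols) ** 2

Z = np.column_stack([np.ones(n), np.log(dispersion), np.log(scale)])
gamma = np.linalg.lstsq(Z, np.log(r2 + 1e-12), rcond=None)[0]

# Step 2: weighted least squares with inverse estimated variances
w = 1.0 / np.exp(Z @ gamma)
Xw = X * w[:, None]
beta_wls = np.linalg.solve(X.T @ Xw, Xw.T @ demand)
print(gamma[1:], beta_wls)
```

The positive coefficients on log dispersion and log scale recover the simulated relationship that variance rises with both drivers.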

Journal ArticleDOI
TL;DR: The authors developed a Markov regime-switching time-varying correlation generalized autoregressive conditional heteroscedasticity (RS-TVC GARCH) model for estimating optimal hedge ratios.
Abstract: The authors develop a Markov regime-switching time-varying correlation generalized autoregressive conditional heteroscedasticity (RS-TVC GARCH) model for estimating optimal hedge ratios. The RS-TVC nests within it both the time-varying correlation GARCH (TVC) and the constant correlation GARCH (CC). Point estimates based on the Nikkei 225 and the Hang Seng index futures data show that the RS-TVC outperforms the CC and the TVC both in- and out-of-sample in terms of variance reduction. Based on H. White's (2000) reality check, the null hypothesis of no improvement of the RS-TVC over the TVC is rejected for the Nikkei 225 index contract but is not rejected for the Hang Seng index contract. © 2007 Wiley Periodicals, Inc. Jrl Fut Mark 27:495–516, 2007
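The RS-TVC GARCH model produces a time-varying version of the textbook minimum-variance hedge ratio; the static quantity it generalizes is simply Cov(spot, futures)/Var(futures). A sketch on simulated returns (numbers illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
f = rng.normal(0, 1.0, n)                  # futures returns
s = 0.8 * f + rng.normal(0, 0.5, n)        # spot returns, correlated with futures

# Static minimum-variance hedge ratio: h* = Cov(s, f) / Var(f)
h_star = np.cov(s, f)[0, 1] / np.var(f, ddof=1)

hedged = s - h_star * f
reduction = 1 - np.var(hedged) / np.var(s)
print(h_star, reduction)
```

Here the hedge recovers the simulated exposure of 0.8 and removes most of the spot-position variance; the models above aim to improve on this by letting the covariances, and hence h*, vary over time and across regimes.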

Journal ArticleDOI
TL;DR: In this article, Lagrange multiplier-based tests for the null hypothesis of no cointegration were proposed, which are general enough to allow for heteroskedastic and serially correlated errors, deterministic trends, and a structural break of unknown timing in both the intercept and slope.
Abstract: This article proposes Lagrange multiplier-based tests for the null hypothesis of no cointegration. The tests are general enough to allow for heteroskedastic and serially correlated errors, deterministic trends, and a structural break of unknown timing in both the intercept and slope. The limiting distributions of the test statistics are derived, and are found to be invariant not only with respect to the trend and structural break, but also with respect to the regressors. A small Monte Carlo study is also conducted to investigate the small-sample properties of the tests. The results reveal that the tests have small size distortions and good power relative to other tests.

Journal ArticleDOI
TL;DR: In this article, a factor generalized autoregressive conditional heteroscedasticity (GARCH) model is proposed and test procedures for checking the correctness of the number of factors are developed.
Abstract: We propose a factor generalized autoregressive conditional heteroscedasticity (GARCH) model and develop test procedures for checking the correctness of the number of factors. Maximum likelihood estimation of the model is straightforward once computationally simple preliminary estimates are obtained. Motivated by the empirical application of the article, a mixture of Gaussian distributions is considered in addition to the conventional Gaussian likelihood. Interestingly, some parameters of the conditional covariance matrix that are not identifiable under normality, can be identified when the mixture specification is used. As an empirical example, modeling a system of exchange rate returns and testing for volatility transmission is considered.

Journal ArticleDOI
TL;DR: The authors developed a new bivariate Markov regime switching BEKK-generalized autoregressive conditional Heteroscedasticity (GARCH) model and applied it to estimate time-varying minimum variance hedge ratios for corn and nickel spot and futures prices.
Abstract: This article develops a new bivariate Markov regime switching BEKK-Generalized Autoregressive Conditional Heteroscedasticity (GARCH) (RS-BEKK-GARCH) model. The model is a state-dependent bivariate BEKK-GARCH model and an extension of Gray's univariate generalized regime-switching (GRS) model to the bivariate case. To solve the path-dependency problem inherent in the bivariate regime switching BEKK-GARCH model, we propose a recombining method for the covariance term in the conditional variance-covariance matrix. The model is applied to estimate time-varying minimum variance hedge ratios for corn and nickel spot and futures prices. Out-of-sample point estimates of hedging portfolio variance show that compared to the state-independent BEKK-GARCH model, the RS-BEKK-GARCH model improves out-of-sample hedging effectiveness for both corn and nickel data. We perform White's (2000) data-snooping reality check to test for predictive superiority of RS-BEKK-GARCH over the benchmark model and find that the difference ...

Journal ArticleDOI
TL;DR: In this article, the asymptotic distribution of the quasi-maximum likelihood estimator for generalized autoregressive conditional heteroskedastic (GARCH) processes is established.

Posted Content
TL;DR: In this paper, generalized autoregressive conditional heteroscedasticity (GARCH) effects imply the probability of large losses is greater than standard mean-variance analysis suggests.
Abstract: Generalized autoregressive conditional heteroscedasticity (GARCH) effects imply the probability of large losses is greater than standard mean-variance analysis suggests. Accurately capturing GARCH for housing markets is vital for portfolio management. Previous investigations of GARCH in housing have focused on narrow regions or aggregated effects of GARCH across markets, imposing one nationwide effect. This paper tests fifty state housing markets for GARCH, and develops individual GARCH models for those states, allowing for different effects in each. Results indicate there are GARCH effects in over half the states, and the signs and magnitudes vary widely, highlighting the importance of estimating separate GARCH models for each market.

Journal ArticleDOI
TL;DR: In this article, an estimator of the regression function under heteroscedastic measurement errors is proposed and shown to be optimal, and practical methods, including an adaptive bandwidth selector for the errors-in-variables regression problem, are suggested; their finite-sample performance is illustrated through simulated and real data examples.
Abstract: In the classical errors-in-variables problem, the goal is to estimate a regression curve from data in which the explanatory variable is measured with error. In this context, nonparametric methods have been proposed that rely on the assumption that the measurement errors are identically distributed. Although there are many situations in which this assumption is too restrictive, nonparametric estimators in the more realistic setting of heteroscedastic errors have not been studied in the literature. We propose an estimator of the regression function in such a setting and show that it is optimal. We give estimators in cases in which the error distributions are unknown and replicated observations are available. Practical methods, including an adaptive bandwidth selector for the errors-in-variables regression problem, are suggested, and their finite-sample performance is illustrated through simulated and real data examples.

Journal ArticleDOI
TL;DR: In this article, the authors considered binary response models where errors are uncorrelated with a set of instrumental variables and are independent of a continuous regressor v, conditional on all other variables.

Journal ArticleDOI
TL;DR: In this paper, two procedures, one based on the likelihood ratio test (LRT) and one on a cumulative sum (cusum) statistic, are considered and compared in a simulation study; the authors conclude that for a single covariance change the cusum procedure is more powerful in small and medium samples, whereas the LRT is more effective in large samples.

Journal ArticleDOI
TL;DR: In this article, the authors consider how to select the auxiliary distribution to implement the wild bootstrap for regressions featuring heteroscedasticity of unknown form and propose a new class of two-point distributions and suggest using the Kolmogorov-Smirnov statistic as a selection criterion.