
Showing papers on "Heteroscedasticity published in 2001"


Book
Ruey S. Tsay
15 Oct 2001
TL;DR: A textbook treatment of financial time series analysis, covering linear and nonlinear models, conditional heteroscedastic models, high-frequency data, extreme values and value at risk, multivariate volatility models, state-space models with the Kalman filter, and Markov chain Monte Carlo methods.
Abstract: Preface. Preface to First Edition. 1. Financial Time Series and Their Characteristics. 2. Linear Time Series Analysis and Its Applications. 3. Conditional Heteroscedastic Models. 4. Nonlinear Models and Their Applications. 5. High-Frequency Data Analysis and Market Microstructure. 6. Continuous-Time Models and Their Applications. 7. Extreme Values, Quantile Estimation, and Value at Risk. 8. Multivariate Time Series Analysis and Its Applications. 9. Principal Component Analysis and Factor Models. 10. Multivariate Volatility Models and Their Applications. 11. State-Space Models and Kalman Filter. 12. Markov Chain Monte Carlo Methods with Applications. Index.

2,766 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined how well the alternative estimators behave econometrically in terms of bias and precision when the data are skewed or have other common data problems (heteroscedasticity, heavy tails).

1,854 citations


Journal ArticleDOI
TL;DR: The basic least squares model assumes that the expected value of all error terms, when squared, is the same at any given point (homoskedasticity); ARCH/GARCH models, as discussed in this paper, instead treat heteroskedasticity as a variance to be modeled.
Abstract: The great workhorse of applied econometrics is the least squares model. This is a natural choice, because applied econometricians are typically called upon to determine how much one variable will change in response to a change in some other variable. Increasingly however, econometricians are being asked to forecast and analyze the size of the errors of the model. In this case, the questions are about volatility, and the standard tools have become the ARCH/ GARCH models. The basic version of the least squares model assumes that the expected value of all error terms, when squared, is the same at any given point. This assumption is called homoskedasticity, and it is this assumption that is the focus of ARCH/ GARCH models. Data in which the variances of the error terms are not equal, in which the error terms may reasonably be expected to be larger for some points or ranges of the data than for others, are said to suffer from heteroskedasticity. The standard warning is that in the presence of heteroskedasticity, the regression coefficients for an ordinary least squares regression are still unbiased, but the standard errors and confidence intervals estimated by conventional procedures will be too narrow, giving a false sense of precision. Instead of considering this as a problem to be corrected, ARCH and GARCH models treat heteroskedasticity as a variance to be modeled. As a result, not only are the deficiencies of least squares corrected, but a prediction is computed for the variance of each error term. This prediction turns out often to be of interest, particularly in applications in finance. The warnings about heteroskedasticity have usually been applied only to cross-section models, not to time series models. For example, if one looked at the
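The mechanics can be illustrated with a short simulation (my own sketch, not code from the article): an ARCH(1) process, in which each error's variance depends on the previous squared error, produces exactly the kind of volatility clustering described above.

```python
import numpy as np

# Sketch of an ARCH(1) process: eps_t = sigma_t * z_t with
# sigma_t^2 = omega + alpha * eps_{t-1}^2. Parameter values are
# illustrative choices, not taken from the article.
def simulate_arch1(omega, alpha, n, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    eps = np.zeros(n)
    sigma2 = np.zeros(n)
    sigma2[0] = omega / (1.0 - alpha)        # start at the unconditional variance
    eps[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, n):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2
        eps[t] = np.sqrt(sigma2[t]) * z[t]
    return eps, sigma2

eps, sigma2 = simulate_arch1(omega=0.2, alpha=0.5, n=5000)
# The errors are serially uncorrelated, but their squares are not:
# large errors cluster together, so the variance itself is predictable.
sq = eps ** 2
acf1 = np.corrcoef(sq[:-1], sq[1:])[0, 1]
print(acf1)
```

For these parameters the lag-1 autocorrelation of the squared errors is close to alpha, so the printed value is noticeably positive, while the lag-1 autocorrelation of the errors themselves stays near zero.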

1,167 citations


Journal ArticleDOI
TL;DR: In this paper, the authors define the regime-switching lognormal model and compare the fit of the model to the data with other common econometric models, including the generalized autoregressive conditionally heteroskedastic model.
Abstract: In this paper I first define the regime-switching lognormal model. Monthly data from the Standard and Poor’s 500 and the Toronto Stock Exchange 300 indices are used to fit the model parameters, using maximum likelihood estimation. The fit of the regime-switching model to the data is compared with other common econometric models, including the generalized autoregressive conditionally heteroskedastic model. The distribution function of the regime-switching model is derived. Prices of European options using the regime-switching model are derived and implied volatilities explored. Finally, an example of the application of the model to maturity guarantees under equity-linked insurance is presented. Equations for quantile and conditional tail expectation (Tail-VaR) risk measures are derived, and a numerical example compares the regime-switching lognormal model results with those using the more traditional lognormal stock return model.
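A minimal simulation sketch (all parameter values below are invented for illustration, not the fitted S&P 500 or TSE 300 values) shows the model's key feature: switching between a calm and a volatile lognormal regime produces the fat-tailed return distribution that a single lognormal misses.

```python
import numpy as np

# Two-state regime-switching lognormal sketch: monthly log-returns are
# N(mu_k, sigma_k^2) in regime k, and the regime follows a Markov chain.
# mu, sigma, and the transition probabilities are hypothetical values.
def simulate_rsln(n, p01=0.04, p10=0.2,
                  mu=(0.01, -0.02), sigma=(0.035, 0.08), seed=4):
    rng = np.random.default_rng(seed)
    state = 0
    r = np.empty(n)
    for t in range(n):
        r[t] = rng.normal(mu[state], sigma[state])
        leave = p01 if state == 0 else p10   # probability of switching regime
        if rng.random() < leave:
            state = 1 - state
    return r

r = simulate_rsln(5000)
# Mixing the two regimes yields excess kurtosis (a single normal has kurtosis 3).
kurt = np.mean((r - r.mean()) ** 4) / np.var(r) ** 2
print(kurt)
```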

455 citations


Journal ArticleDOI
TL;DR: The authors proposed a class of asymptotic N(0,1) tests for volatility spillover between two time series that exhibit conditional heteroskedasticity and may have infinite unconditional variances.

416 citations


Journal ArticleDOI
TL;DR: In this article, a simple resampling method based on repeatedly perturbing the objective function is proposed to estimate the covariance matrix of the estimator of a vector of parameters of interest; inferences can then be made from a large collection of the resulting optimisers.
Abstract: Suppose that under a semiparametric setting an estimator of a vector of parameters of interest is obtained by optimising an objective function which has a U-process structure. The covariance matrix of the estimator is generally a function of the underlying density function, which may be difficult to estimate well by conventional methods. In this paper, we present a simple resampling method by perturbing the objective function repeatedly. Inferences of the parameters can then be made based on a large collection of the resulting optimisers. We illustrate our proposal by three examples with a heteroscedastic regression model.
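The idea can be sketched in a few lines. The example below is my own illustration with a smooth least-squares objective rather than the paper's U-process objective: each term of the objective is multiplied by an independent positive random weight with mean one, the perturbed objective is re-optimised, and the spread of the resulting optimisers estimates the covariance matrix.

```python
import numpy as np

# Perturbation-resampling sketch (illustrative setting, not the paper's):
# heteroscedastic linear regression fitted by least squares.
rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0.0, 2.0, n)
y = 1.0 + 2.0 * x + (0.5 + 0.5 * x) * rng.standard_normal(n)  # heteroscedastic noise
X = np.column_stack([np.ones(n), x])

def weighted_ls(w):
    # optimiser of sum_i w_i * (y_i - b0 - b1 * x_i)^2
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

beta_hat = weighted_ls(np.ones(n))                 # the point estimate
perturbed = np.array([weighted_ls(rng.exponential(1.0, n)) for _ in range(500)])
cov_est = np.cov(perturbed, rowvar=False)          # resampling covariance estimate
se = np.sqrt(np.diag(cov_est))
print(beta_hat, se)
```

Because the perturbations only reweight the objective, no residuals or density estimates are needed, which is the method's appeal when the asymptotic covariance depends on an unknown underlying density.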

264 citations


Journal ArticleDOI
TL;DR: The best-fit method for the estimation of low-effect concentrations is validated by a simulation study, and its applicability is demonstrated with toxicity data for 64 chemicals tested in an algal and a bacterial bioassay, where a clear improvement is achieved.
Abstract: Risk assessments of toxic chemicals currently rely heavily on the use of no-observed-effect concentrations (NOECs). Due to several crucial flaws in this concept, however, discussion of replacing NOECs with statistically estimated low-effect concentrations continues. This paper describes a general best-fit method for the estimation of effects and effect concentrations by the use of a pool of 10 different sigmoidal regression functions for continuous toxicity data. Due to heterogeneous variabilities in replicated data (i.e., heteroscedasticity), the concept of generalized least squares is used for the estimation of the model parameters, whereas a nonparametric variance model based on smoothing spline functions is used to describe the heteroscedasticity. To protect the estimates against outliers, the generalized least-squares method is improved by winsorization. On the basis of statistical selection criteria, the best-fit model is chosen individually for each set of data. Furthermore, the bootstrap methodology is applied for constructing confidence intervals for the estimated effect concentrations. The best-fit method for the estimation of low-effect concentrations is validated by a simulation study, and its applicability is demonstrated with toxicity data for 64 chemicals tested in an algal and a bacterial bioassay. In comparison with common methods of concentration-response analysis, a clear improvement is achieved.

258 citations


Journal ArticleDOI
TL;DR: The authors investigate the effects of dynamic heteroskedasticity on statistical factor analysis and show that identification problems are alleviated when variation in factor variances is accounted for; the results also apply to dynamic APT models and other structural models.

188 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the local robustness properties of generalized method of moments (GMM) estimators and of a broad class of GMM-based tests in a unified framework.

180 citations


Journal ArticleDOI
TL;DR: The proposed MAR-ARCH models are applied to two real datasets and appear to capture features of the data better than the competing models.
Abstract: We propose a mixture autoregressive conditional heteroscedastic (MAR-ARCH) model for modeling nonlinear time series. The models consist of a mixture of K autoregressive components with autoregressive conditional heteroscedasticity; that is, the conditional mean of the process variable follows a mixture AR (MAR) process, whereas the conditional variance of the process variable follows a mixture ARCH process. In addition to the advantage of better description of the conditional distributions from the MAR model, the MAR-ARCH model allows a more flexible squared autocorrelation structure. The stationarity conditions, autocorrelation function, and squared autocorrelation function are derived. Construction of multiple-step predictive distributions is discussed. The estimation can be easily done through a simple EM algorithm, and the model selection problem is addressed. The shape-changing feature of the conditional distributions makes these models capable of modeling time series with multimodal conditional distr...

168 citations


Journal ArticleDOI
TL;DR: The properties of the traditional two-sample t test, a modified t test allowing unequal variances, and the Wilcoxon-Mann-Whitney (WMW) test are compared by stochastic simulation; all show acceptable behaviour when the two distributions have similar variance.

Journal ArticleDOI
TL;DR: In this paper, a model based on the Kalman filter framework was proposed to test if an emerging stock market becomes more efficient over time and more integrated with other already established markets in situations in which no macroeconomic conditioning variables are available.
Abstract: This article introduces a model, based on the Kalman-filter framework, that allows for time-varying parameters, latent factors, and a general generalized autoregressive conditional heteroscedasticity (GARCH) structure for the residuals. With this extension of the Bekaert and Harvey model, it is possible to test if an emerging stock market becomes more efficient over time and more integrated with other already established markets in situations in which no macroeconomic conditioning variables are available. We apply this model to the Czech, Polish, Hungarian, and Russian stock markets. We use data at daily frequency running from April 7, 1994, to July 10, 1997. A latent factor captures macroeconomic expectations. Concerning predictability, measured with time-varying autocorrelations, Hungary reached efficiency before 1994. Russia shows signs of ongoing convergence toward efficiency. For Poland and the Czech Republic, we find no improvements. With regard to market integration, there is evidence that the impo...

Journal ArticleDOI
TL;DR: In this paper, three methods using nonparametric estimators of the regression function are discussed for testing the equality of k regression curves from independent samples; the first is based on a linear combination of estimators for the integrated variance function in the individual samples and in the combined sample.
Abstract: In the problem of testing the equality of k regression curves from independent samples, we discuss three methods using nonparametric estimators of the regression function. The first test is based on a linear combination of estimators for the integrated variance function in the individual samples and in the combined sample. The second approach transfers the classical one-way analysis of variance to the situation of comparing nonparametric curves, while the third test compares the differences between the estimates of the individual regression functions by means of an L2-distance. We prove asymptotic normality of all considered statistics under the null hypothesis and local and fixed alternatives with different rates corresponding to the various cases. Additionally, consistency of a wild bootstrap version of the tests is established. In contrast to most of the procedures proposed in the literature, the methods introduced in this paper are also applicable in the case of different design points in each sample and heteroscedastic errors. A simulation study is conducted to investigate the finite sample properties of the new tests and a comparison with recently proposed and related procedures is performed.

Journal ArticleDOI
Jushan Bai, Serena Ng
TL;DR: In this paper, a procedure for testing conditional symmetry is proposed, which does not require the data to be stationary or i.i.d., and the dimension of the conditional variables can be infinite.

Journal ArticleDOI
TL;DR: Estimation of the effect size parameter, D, the standardized difference between population means, is sensitive to heterogeneity of variance (heteroscedasticity), which seems to abound in psychological data; various proposed solutions are reviewed, including measures that do not assume normality or homoscedasticity.
Abstract: Estimation of the effect size parameter, D, the standardized difference between population means, is sensitive to heterogeneity of variance (heteroscedasticity), which seems to abound in psychological data. Pooling s²s assumes homoscedasticity, as do methods for constructing a confidence interval for D, estimating D from t or analysis of variance results, formulas that adjust estimates for inflation by main effects or covariates, and the Q statistic. The common language effect size statistic as an estimate of Pr(X1 > X2), the probability that a randomly sampled member of Population 1 will outscore a randomly sampled member of Population 2, also assumes normality and homoscedasticity. Various proposed solutions are reviewed, including measures that do not make these assumptions, such as the probability of superiority estimate of Pr(X1 > X2). Ways to reconceptualize effect size when treatments may affect moments such as the variance are also discussed.
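The probability-of-superiority estimate mentioned above is simple to compute. A sketch (function name and data are mine, not the article's):

```python
import numpy as np

# Estimate Pr(X1 > X2) by comparing every pair across the two samples,
# counting ties as 1/2. No normality or equal-variance assumption is needed.
def prob_superiority(x1, x2):
    x1 = np.asarray(x1, dtype=float)[:, None]
    x2 = np.asarray(x2, dtype=float)[None, :]
    return float(np.mean((x1 > x2) + 0.5 * (x1 == x2)))

rng = np.random.default_rng(0)
a = rng.normal(1.0, 1.0, 300)   # population 1
b = rng.normal(0.0, 3.0, 300)   # population 2: same family, much larger variance
p = prob_superiority(a, b)
print(p)
```

Unlike a pooled-variance estimate of D, this statistic is unaffected by the heteroscedasticity built into the example: it simply reports how often a random member of the first sample outscores a random member of the second.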

Journal ArticleDOI
TL;DR: In this paper, a generalized autoregressive conditionally heteroskedastic (GARCH) equation is considered, where the coefficients depend on the state of a nonobserved Markov chain.
Abstract: We consider a generalized autoregressive conditionally heteroskedastic (GARCH) equation where the coefficients depend on the state of a nonobserved Markov chain. Necessary and sufficient conditions ensuring the existence of a stationary solution are given. In the case of ARCH regimes, the maximum likelihood estimates are shown to be consistent. The identification problem is also considered. This is illustrated by means of real and simulated data sets.

Journal ArticleDOI
Marc Henry
TL;DR: In this article, the authors consider mean squared error minimizing bandwidths proposed in the literature for the local Whittle, the averaged periodogram and the log periodogram estimates of long memory, and assess their robustness to conditional heteroscedasticity of general form in the errors.
Abstract: The choice of bandwidth, or number of harmonic frequencies, is crucial to semiparametric estimation of long memory in a covariance stationary time series as it determines the rate of convergence of the estimate, and a suitable choice can insure robustness to some non-standard error specifications, such as (possibly long-memory) conditional heteroscedasticity. This paper considers mean squared error minimizing bandwidths proposed in the literature for the local Whittle, the averaged periodogram and the log periodogram estimates of long memory. Robustness of these optimal bandwidth formulae to conditional heteroscedasticity of general form in the errors is considered. Feasible approximations to the optimal bandwidths are assessed in an extensive Monte Carlo study that provides a good basis for comparison of the above-mentioned estimates with automatic bandwidth selection.

Journal ArticleDOI
TL;DR: This paper examines case studies from three different areas of insurance practice: health care, workers’ compensation, and group term life, and exploits tools developed in connection with panel data models for credibility rate-making purposes.
Abstract: In this paper, we examine case studies from three different areas of insurance practice: health care, workers’ compensation, and group term life. These different case studies illustrate how the broad class of panel data models can be applied to different functional areas and to data that have different features. Panel data, also known as longitudinal data, models are regression-type models that have been developed extensively in the biological and economic sciences. The data features that we discuss include heteroscedasticity, random and fixed effect covariates, outliers, serial correlation, and limited dependent variable bias. We demonstrate the process of identifying these features using graphical and numerical diagnostic tools from standard statistical software. Our motivation for examining these cases comes from credibility rate making, a technique for pricing certain types of health care, property and casualty, workers’ compensation, and group life coverages. It has been a part of actuarial ...

Journal ArticleDOI
TL;DR: In this paper, several kernel-based consistent tests of additivity in nonparametric regression have been developed for discrete covariates and parameters estimated from a semiparametric GMM criterion function.

Journal ArticleDOI
TL;DR: In this paper, Muth's (1961) rational expectations model of commodity markets implies that inventory carryover creates ARCH processes in prices, and the model also indicates that the expected price variance is an explanatory variable in price regressions.
Abstract: Muth's (1961) rational expectations model of commodity markets implies that inventory carryover creates ARCH processes in prices. The model also indicates that the expected price variance is an explanatory variable in price regressions. Hypotheses were tested on price data of twenty commodities using a variation of Engle et al. (1987) ARCH–M technique. An ARCH process was found in storable and not in non-storable commodity data, as expected. However, changes in expected price variance have no significant impact on price. Copyright © 2001 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce a new concept of contemporaneous asymmetry in conditional heteroskedasticity models and propose an original class of models aimed at capturing the leverage effect, contemporaneous asymmetry, as well as time-varying skewness and kurtosis.

Journal ArticleDOI
TL;DR: In this paper, the authors develop test statistics to test hypotheses in nonlinear weighted regression models with serial correlation or conditional heteroscedasticity of unknown form, and derive the limiting null distributions of these new tests in a general nonlinear setting, and show that the distributions depend only on the number of restrictions being tested.
Abstract: We develop test statistics to test hypotheses in nonlinear weighted regression models with serial correlation or conditional heteroscedasticity of unknown form. The novel aspect is that these tests are simple and do not require the use of heteroscedasticity-autocorrelation consistent (HAC) covariance matrix estimators. This new class of tests uses stochastic transformations to eliminate nuisance parameters as a substitute for consistently estimating the nuisance parameters. We derive the limiting null distributions of these new tests in a general nonlinear setting, and show that although the tests have nonstandard distributions, the distributions depend only on the number of restrictions being tested. We perform some simulations on a simple model and apply the new method of testing to an empirical example and illustrate that the size of the new test is less distorted than tests using HAC covariance matrix estimators.

Journal ArticleDOI
TL;DR: In this article, the authors examined the ability of several models to generate optimal hedge ratios, including univariate and multivariate generalized autoregressive conditionally heteroscedastic (GARCH) models, and exponentially weighted and simple moving averages.
Abstract: This article examines the ability of several models to generate optimal hedge ratios. Statistical models employed include univariate and multivariate generalized autoregressive conditionally heteroscedastic (GARCH) models, and exponentially weighted and simple moving averages. The variances of the hedged portfolios derived using these hedge ratios are compared with those based on market expectations implied by the prices of traded options. One-month and three-month hedging horizons are considered for four currency pairs. Overall, it has been found that an exponentially weighted moving-average model leads to lower portfolio variances than any of the GARCH-based, implied or time-invariant approaches.
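The exponentially weighted moving-average approach that performs best in the article reduces to a short recursion. The sketch below is my own minimal version (data simulated; λ = 0.94 is a conventional smoothing choice, not necessarily the article's): the minimum-variance hedge ratio is the EWMA covariance of spot and futures returns divided by the EWMA variance of futures returns.

```python
import numpy as np

# EWMA estimate of the minimum-variance hedge ratio
# h = Cov(spot, futures) / Var(futures), with exponentially decaying weights.
def ewma_hedge_ratio(spot_ret, fut_ret, lam=0.94):
    cov = spot_ret[0] * fut_ret[0]
    var = fut_ret[0] ** 2
    for s, f in zip(spot_ret[1:], fut_ret[1:]):
        cov = lam * cov + (1.0 - lam) * s * f
        var = lam * var + (1.0 - lam) * f * f
    return cov / var

# Simulated daily returns with a true hedge ratio of about 0.8 (illustrative only).
rng = np.random.default_rng(2)
f = rng.standard_normal(1000) * 0.01
s = 0.8 * f + rng.standard_normal(1000) * 0.003
print(ewma_hedge_ratio(s, f))
```

Because recent observations get the most weight, the ratio adapts to time-varying volatility without the parameter estimation a GARCH specification requires, which is one plausible reading of why it fares well here.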

Journal ArticleDOI
TL;DR: In this paper, a two-stage approach is adopted to estimate the relevant parameters: in the first stage the conditional quantile function is estimated nonparametrically by the local polynomial estimator discussed in Chaudhuri (1991a, 1991b) and Cavanagh (1996, Preprint).

Journal ArticleDOI
TL;DR: In this paper, a partially nonstationary multivariate autoregressive model was investigated and the asymptotic distributions of three estimators, including the least squares estimator, a full-rank maximum likelihood estimator and a reduced-rank estimator were derived.
Abstract: This paper investigates a partially nonstationary multivariate autoregressive model, which allows its innovations to be generated by a multivariate ARCH, autoregressive conditional heteroscedastic, process. Three estimators, including the least squares estimator, a full-rank maximum likelihood estimator and a reduced-rank maximum likelihood estimator, are considered and their asymptotic distributions are derived. When the multivariate ARCH process reduces to the innovation with a constant covariance matrix, these asymptotic distributions are the same as those given by Ahn & Reinsel (1990). However, in the presence of multivariate ARCH innovations, the asymptotic distributions of the full-rank maximum likelihood estimator and the reduced-rank maximum likelihood estimator involve two correlated multivariate Brownian motions, which are different from those given by Ahn & Reinsel (1990). Simulation results show that the full-rank and reduced-rank maximum likelihood estimators are more efficient than the least squares estimator. An empirical example shows that the two features of multivariate conditional heteroscedasticity and partial nonstationarity may be present simultaneously in a multivariate time series.

Journal ArticleDOI
Fushing Hsieh
TL;DR: In this article, a class of non-proportional hazards regression models is considered to have hazard specifications consisting of a power form of cross-effects on the base-line hazard function.
Abstract: A class of non-proportional hazards regression models is considered to have hazard specifications consisting of a power form of cross-effects on the base-line hazard function. The primary goal of these models is to deal with settings in which heterogeneous distribution shapes of survival times may be present in populations characterized by some observable covariates. Although effects of such heterogeneity can be explicitly seen through crossing cumulative hazards phenomena in k-sample problems, they are barely visible in a one-sample regression setting. Hence, heterogeneity of this kind may not be noticed and, more importantly, may result in severely misleading inference. This is because the partial likelihood approach cannot eliminate the unknown cumulative base-line hazard functions in this setting. For coherent statistical inferences, a system of martingale processes is taken as a basis with which, together with the method of sieves, an overidentified estimating equation approach is proposed. A Pearson χ² type of goodness-of-fit testing statistic is derived as a by-product. An example with data on gastric cancer patients' survival times is analysed.

Journal ArticleDOI
TL;DR: This paper investigates the dependence of option prices on autoregressive dynamics under stylized facts of stock returns, i.e. conditional heteroskedasticity, leverage effect, and conditional leptokurtosis.

Journal ArticleDOI
TL;DR: In this article, the authors estimate the covariance matrix of ordinary least squares estimates in a linear regression model when heteroskedasticity is suspected, performing Monte Carlo simulations on the White estimator and its variants.
Abstract: This paper considers the issue of estimating the covariance matrix of ordinary least squares estimates in a linear regression model when heteroskedasticity is suspected. We perform Monte Carlo simulation on the White estimator, which is commonly used in empirical research, and also on some alternatives based on different bootstrapping schemes. Our results reveal that the White estimator can be considerably biased when the sample size is not very large, that bias correction via bootstrap does not work well, and that the weighted bootstrap estimators tend to display smaller biases than the White estimator and its variants, under both homoskedasticity and heteroskedasticity. Our results also reveal that the presence of (potentially) influential observations in the design matrix plays an important role in the finite-sample performance of the heteroskedasticity-consistent estimators.
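For reference, the White (HC0) estimator studied above has a compact closed form, (X'X)^(-1) X' diag(e_i²) X (X'X)^(-1). A minimal sketch with simulated heteroskedastic data (my own example, not the paper's simulation design):

```python
import numpy as np

# OLS with conventional vs. White (HC0) heteroskedasticity-consistent
# standard errors. The data-generating process is an illustrative choice:
# the error standard deviation grows with x, so conventional SEs mislead.
rng = np.random.default_rng(3)
n = 1000
x = rng.uniform(0.0, 3.0, n)
y = 1.0 + 2.0 * x + 0.5 * x ** 2 * rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
e = y - X @ beta                                      # OLS residuals

cov_ols = (e @ e / (n - 2)) * XtX_inv                 # assumes homoskedasticity
meat = X.T @ (X * (e ** 2)[:, None])                  # X' diag(e_i^2) X
cov_hc0 = XtX_inv @ meat @ XtX_inv                    # White sandwich estimator
print(np.sqrt(np.diag(cov_ols)), np.sqrt(np.diag(cov_hc0)))
```

With variance increasing in x, the robust slope standard error exceeds the conventional one, the "false sense of precision" warned about above; the paper's point is that even HC0 can be badly biased in small samples, motivating the bootstrap variants it studies.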

Journal ArticleDOI
TL;DR: Exact REML for heteroscedastic linear models is compared with a number of approximate REML methods, including those of Lee and Nelder (LN98) and Smyth and Verbyla (SV99); it is shown that neither of these reduces to REML in the normal linear case, and asymptotic variances and efficiencies are obtained for these and other estimators of the same general form.
Abstract: Exact REML for heteroscedastic linear models is compared with a number of approximate REML methods which have been proposed in the literature, especially with the methods proposed by Lee and Nelder (LN98) and Smyth and Verbyla (SV99) for simultaneous mean-dispersion modelling in generalized linear models. It is shown that neither of the LN98 or SV99 methods reduces to REML in the normal linear case. Asymptotic variances and efficiencies are obtained for these and other estimators of the same general form. A new algorithm is suggested, similar to one suggested by Huele et al., which returns the correct REML estimators and an improved approximation to the standard errors. It is possible to obtain REML estimators by alternating between two generalized linear models but the final fitted generalized linear model objects will not return the correct standard errors for the variance coefficients. The true REML likelihood calculations therefore fit only partially into the double generalized linear model framework.

Journal ArticleDOI
TL;DR: In this paper, a two-equation model is proposed for analysing household behaviour with data from dichotomous choice contingent valuation (DCCV) surveys; it incorporates a two-level decision structure, a decision on whether to participate in having a WTP and a decision on the WTP amount conditional on participating, with two separate stochastic processes determining the probability and the conditional level of WTP.
Abstract: Modelling household behaviour with the data from dichotomous choice contingent valuation (DCCV) surveys is often complicated by zero willingness to pay (WTP) responses in the sample. To deal with the zero responses, a two-equation model is considered, which incorporates a two-level decision structure, a decision on whether to participate in having WTP and a decision on the WTP amount conditional on deciding to participate, and two separate stochastic processes that determine the probability and conditional level of WTP are featured. The model is empirically applied to household survey data, in which the DCCV questions concerned the benefits of air quality improvement in Korea. To put the issue of the two-equation model in perspective, this paper also experiments with economic and econometric specifications: utility theoretic restriction and heteroscedasticity. It is shown how failure to allow for the restriction distorts aggregate benefit estimates and the existence of heteroscedasticity in error term is ...