
Showing papers on "Heteroscedasticity published in 1998"



Journal ArticleDOI
TL;DR: In this paper, the asymptotic properties of quasi-maximum likelihood estimators for multivariate heteroskedastic models are studied, and conditions are given under which strong consistency can be obtained; unlike in the current literature, the assumptions on the existence of moments of the error term are weak, and no study of the various derivatives of the likelihood is required.
Abstract: This paper deals with the asymptotic properties of quasi-maximum likelihood estimators for multivariate heteroskedastic models. For a general model, we give conditions under which strong consistency can be obtained; unlike in the current literature, the assumptions on the existence of moments of the error term are weak, and no study of the various derivatives of the likelihood is required. Then, for a particular model, the multivariate GARCH model with constant correlation, we describe the set of parameters where these conditions hold.

469 citations


Journal ArticleDOI
TL;DR: In this article, the authors review the literature on stated preference elicitation methods and introduce the concept of testing data generation process invariance across SP and revealed preference (RP) choice data sources.

450 citations


Posted Content
TL;DR: Fan et al. as discussed by the authors proposed an adaptive method for estimating the conditional variance by applying a local linear regression to the squared residuals under the assumption that the observations are made from a strictly stationary and absolutely regular process.
Abstract: Author(s): Fan, Jianqing; Yao, Qiwei | Abstract: Conditional heteroscedasticity has often been used in modelling and understanding the variability of statistical data. Under a general setup which includes the nonlinear time series model as a special case, we propose an efficient and adaptive method for estimating the conditional variance. The basic idea is to apply a local linear regression to the squared residuals. We demonstrate that without knowing the regression function, we can estimate the conditional variance asymptotically as well as if the regression were given. This asymptotic result, established under the assumption that the observations are made from a strictly stationary and absolutely regular process, is also verified via simulation. Further, the asymptotic result paves the way for adapting an automatic bandwidth selection scheme. An application with financial data illustrates the usefulness of the proposed techniques.

415 citations
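Fan and Yao's two-step idea above — fit the mean function, then run a second local linear smooth on the squared residuals — can be sketched in a few lines. The Gaussian kernel, the bandwidths, and the simulated sine-plus-noise data below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def local_linear(x, y, xgrid, h):
    """Local linear smoother with a Gaussian kernel and bandwidth h."""
    est = np.empty(len(xgrid))
    for i, x0 in enumerate(xgrid):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)       # kernel weights
        X = np.column_stack([np.ones_like(x), x - x0])
        A = X.T @ (w[:, None] * X)
        b = X.T @ (w * y)
        est[i] = np.linalg.solve(A, b)[0]            # intercept = fit at x0
    return est

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(-2, 2, n)
sigma = 0.3 + 0.2 * x ** 2                           # true conditional s.d.
y = np.sin(x) + sigma * rng.standard_normal(n)

# Step 1: estimate the mean function and form squared residuals.
resid2 = (y - local_linear(x, y, x, h=0.3)) ** 2

# Step 2: smooth the squared residuals to estimate the variance function.
xg = np.linspace(-1.5, 1.5, 7)
vhat = local_linear(x, resid2, xg, h=0.4)
```

The second smooth reuses first-stage residuals rather than the unknown true errors, which is exactly why the paper's "as well as if the regression were given" result is notable.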


Journal ArticleDOI
TL;DR: This paper examined the conditional heteroscedasticity of the yen-dollar exchange rate and found that the appreciation and depreciation shocks of the Japanese yen against the dollar have similar effects on future volatilities.
Abstract: This paper examines the conditional heteroscedasticity of the yen–dollar exchange rate. A model is constructed by extending the asymmetric power autoregressive conditional heteroscedasticity model to a process that is fractionally integrated. It is found that, unlike the equity markets, the appreciation and depreciation shocks of the yen against the dollar have similar effects on future volatilities. Although the results reject both the stable and the integrated models, our analysis of the response coefficients of the past shocks and the application of the models to the estimation of the capital requirements for trading the currencies show that there are no substantial differences between the fractionally integrated models and the stable models. © 1998 John Wiley & Sons, Ltd.

349 citations
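The asymmetric power ARCH recursion that the abstract above extends to the fractionally integrated case can be written, in its plain APARCH(1,1) form, as sigma_t^delta = omega + alpha * (|e_{t-1}| - gamma * e_{t-1})^delta + beta * sigma_{t-1}^delta. A minimal filter follows; the function name, initialization, and parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def aparch_filter(eps, omega, alpha, gamma, beta, delta):
    """Conditional standard deviations from the APARCH(1,1) recursion
    sigma_t^delta = omega + alpha*(|e|-gamma*e)^delta + beta*sigma_{t-1}^delta.
    gamma > 0 makes negative shocks raise volatility more than positive ones."""
    s_d = np.empty(len(eps))           # holds sigma_t^delta
    s_d[0] = omega / (1 - beta)        # crude initialization
    for t in range(1, len(eps)):
        e = eps[t - 1]
        s_d[t] = omega + alpha * (abs(e) - gamma * e) ** delta + beta * s_d[t - 1]
    return s_d ** (1 / delta)
```

Setting gamma = 0 gives a symmetric volatility response, which corresponds to the paper's finding that appreciation and depreciation shocks of the yen have similar effects on future volatilities.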


BookDOI
01 Jan 1998
TL;DR: This chapter discusses Regression and Data Analysis, Modelling Simultaneous Systems, and Working with Data: Cross Section and Time Series Analysis.
Abstract: Preface. Data Theory in Development Research. Variables and Their Distributions. Part I Regression and Data Analysis. Simple Regression with Graphics. Curve Fitting. Multiple Regression with Graphics. Misspecification with Multiple Regression. Part II Working with Data: Cross Section and Time Series Analysis. Cross Section Analysis: the Problem of Heteroscedasticity. Working with Categorical Variables. Modelling Time Series. Autocorrelation and Misspecification. Part III Modelling Simultaneous Systems. Introduction to Simultaneous Models. Identification and Estimation. Carrying out Econometric Research.

277 citations


Journal ArticleDOI
TL;DR: The MBB covariance estimator is shown to provide heteroskedasticity and autocorrelation consistent (HAC) standard errors for least squares (LS) and quantile regression (QR) coefficient estimators.

204 citations
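The moving blocks bootstrap (MBB) behind the HAC standard errors above resamples overlapping blocks of observations, so dependence and heteroskedasticity within a block are preserved. A minimal sketch for the least squares case only (the helper name, block length, and simulated design are assumptions; the quantile regression case is omitted):

```python
import numpy as np

def mbb_ols_se(X, y, block_len, n_boot=500, seed=0):
    """Moving-block-bootstrap standard errors for OLS coefficients.
    Overlapping blocks of rows are resampled with replacement and
    concatenated to the original sample length."""
    rng = np.random.default_rng(seed)
    n = len(y)
    starts = np.arange(n - block_len + 1)        # all overlapping block starts
    k = int(np.ceil(n / block_len))              # blocks per bootstrap sample
    betas = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = np.concatenate([np.arange(s, s + block_len)
                              for s in rng.choice(starts, size=k)])[:n]
        betas[b] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    return betas.std(axis=0, ddof=1)
```

The block length trades off bias (too short misses dependence) against variance (too long leaves few distinct blocks); the value 10 used below is purely illustrative.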


Journal ArticleDOI
TL;DR: This article used the Gibbs sampling approach in the context of a three state Markov-switching model to show how heteroskedasticity affects inference and suggest two strategies for valid inference.

149 citations


Journal ArticleDOI
TL;DR: In this article, a simple root-n consistent, asymptotically normal semiparametric estimator of the coefficient vector β in the latent variable specification y = L(β'x + e) is constructed.
Abstract: A simple root-n consistent, asymptotically normal semiparametric estimator of the coefficient vector β in the latent variable specification y = L(β'x + e) is constructed. The distribution of e is unknown and may be correlated with x or be conditionally heteroscedastic, e.g., x can contain measurement error. The function L can also be unknown. The identification assumption is that e is uncorrelated with instruments u and that the conditional distribution of e given x and u does not depend on one of the regressors, which has some special properties. Extensions to more general latent variable specifications are provided.

125 citations


Journal ArticleDOI
TL;DR: In this paper, a consistent test for heteroscedasticity is proposed in a nonparametric regression set-up, based on an estimator for the best L2-approximation of the variance function by a constant.
Abstract: The importance of being able to detect heteroscedasticity in regression is widely recognized because efficient inference for the regression function requires that heteroscedasticity is taken into account. In this paper a simple consistent test for heteroscedasticity is proposed in a nonparametric regression set-up. The test is based on an estimator for the best L2-approximation of the variance function by a constant. Under mild assumptions asymptotic normality of the corresponding test statistic is established even under arbitrary fixed alternatives. Confidence intervals are obtained for a corresponding measure of heteroscedasticity. The finite sample performance and robustness of these procedures are investigated in a simulation study and Box-type corrections are suggested for small sample sizes.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate several important inference issues for factor models with dynamic heteroskedasticity in the common factors and propose a consistent two-step estimation procedure which does not rely on knowledge of any factor estimates, and explain how to compute correct standard errors.
Abstract: We investigate several important inference issues for factor models with dynamic heteroskedasticity in the common factors. First, we show that such models are identified if we take into account the time-variation in the variances of the factors. Our results also apply to dynamic versions of the APT, dynamic factor models, and vector autoregressions. Secondly, we propose a consistent two-step estimation procedure which does not rely on knowledge of any factor estimates, and explain how to compute correct standard errors. Thirdly, we develop a simple preliminary LM test for the presence of ARCH effects in the common factors. Finally, we conduct a Monte Carlo analysis of the finite sample properties of the proposed estimators and hypothesis tests.
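The "simple preliminary LM test for the presence of ARCH effects" mentioned above is, in spirit, Engle's n·R² regression of squared residuals on their own lags. The sketch below is univariate for simplicity (an assumption — the paper applies the idea to common factors):

```python
import numpy as np

def arch_lm_stat(resid, lags=4):
    """Engle's LM statistic for ARCH effects: regress e_t^2 on a constant
    and its first `lags` lags; n * R^2 is asymptotically chi-squared with
    `lags` degrees of freedom under the null of no ARCH."""
    e2 = resid ** 2
    n = len(e2)
    y = e2[lags:]
    cols = [np.ones(n - lags)]
    cols += [e2[lags - j:n - j] for j in range(1, lags + 1)]   # lagged e^2
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    r2 = 1.0 - (u @ u) / ((y - y.mean()) @ (y - y.mean()))
    return len(y) * r2
```

On residuals from a genuine ARCH process the statistic is far out in the chi-squared tail; on i.i.d. noise it hovers near the degrees of freedom.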

Posted Content
TL;DR: In this article, an alternative test statistic is presented and a better approximation to the test distribution is derived, supported by simulation studies, for the unbalanced heteroscedastic one-way random ANOVA model and for the probability difference method including the treatment-by-centres interaction.
Abstract: In many fields of application, test statistics are obtained by combining estimates from several experiments, studies or centres of a multicentre trial. The commonly used test procedure to judge the evidence of a common overall effect can result in a considerable overestimation of the significance level, leading to a high rate of too-liberal decisions. An alternative test statistic is presented and a better-approximating test distribution is derived. The methods are discussed explicitly for the unbalanced heteroscedastic one-way random ANOVA model and for the probability difference method including the treatment-by-centres interaction. Numerical results are presented from simulation studies.

Journal ArticleDOI
TL;DR: In this paper, the maximum likelihood estimator (MLE) for unstable autoregressive moving-average (ARMA) time series with the noise sequence satisfying a generalized autoregressive conditional heteroscedastic (GARCH) process is investigated, and it is shown that the MLE satisfying the likelihood equation exists and is consistent.
Abstract: This paper investigates the maximum likelihood estimator (MLE) for unstable autoregressive moving-average (ARMA) time series with the noise sequence satisfying a generalized autoregressive conditional heteroscedastic (GARCH) process. Under some mild conditions, it is shown that the MLE satisfying the likelihood equation exists and is consistent. The limiting distribution of the MLE is derived in a unified manner for all types of characteristic roots on or outside the unit circle and is expressed as a functional of stochastic integrals in terms of Brownian motions. For various types of unit roots, the limiting distribution of the MLE does not depend on the parameters in the moving-average component and hence, when the GARCH innovations reduce to usual white noises with a constant conditional variance, they are the same as those for the least squares estimators (LSE) for unstable autoregressive models given by Chan and Wei (1988). In the presence of the GARCH innovations, the limiting distribution will involve a sequence of independent bivariate Brownian motions with correlated components. These results are different from those already known in the literature and, in this case, the MLE of unit roots will be much more efficient than ordinary least squares estimation.

Dissertation
01 Apr 1998
TL;DR: In this article, the performance of the generalised autoregressive conditional heteroscedasticity (GARCH) model and its modifications in forecasting stock market volatility is evaluated using the rates of return from the daily stock market indices of the Kuala Lumpur Stock Exchange (KLSE).
Abstract: The performance of generalised autoregressive conditional heteroscedasticity (GARCH) model and its modifications in forecasting stock market volatility are evaluated using the rate of returns from the daily stock market indices of Kuala Lumpur Stock Exchange (KLSE). These indices include Composite Index, Tins Index, Plantations Index, Properties Index and Finance Index. The models are stationary GARCH, unconstrained GARCH, non-negative GARCH, GARCH in mean (GARCH-M), exponential GARCH (EGARCH) and integrated GARCH. The parameters of these models and variance processes are estimated jointly using maximum likelihood method. The performance of the within-sample estimation is assessed using several goodness-of-fit statistics and the accuracy of the out-of-sample forecasts is judged using mean squared error.
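Joint maximum likelihood estimation of a GARCH(1,1) model, the base case in the dissertation above, can be sketched with a Gaussian likelihood and a numerical optimizer. The simulated series below merely stands in for the KLSE return data, and all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_negloglik(params, r):
    """Gaussian negative log-likelihood of a GARCH(1,1):
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    omega, alpha, beta = params
    h = np.empty(len(r))
    h[0] = np.var(r)                       # crude initialization
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * h) + r ** 2 / h)

# Simulate a return series to stand in for an index (illustrative only).
rng = np.random.default_rng(1)
n, omega, alpha, beta = 3000, 0.05, 0.10, 0.85
r = np.empty(n)
h = omega / (1 - alpha - beta)             # unconditional variance
for t in range(n):
    r[t] = np.sqrt(h) * rng.standard_normal()
    h = omega + alpha * r[t] ** 2 + beta * h

res = minimize(garch11_negloglik, x0=[0.1, 0.05, 0.8], args=(r,),
               bounds=[(1e-6, None), (0.0, 0.999), (0.0, 0.999)])
omega_hat, alpha_hat, beta_hat = res.x
```

Out-of-sample forecast accuracy in the dissertation is then judged by mean squared error between forecast variances and squared returns; that step is omitted here.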

Journal ArticleDOI
TL;DR: In this article, the authors compared the performance of the mean equality tests proposed by Alexander and Govern, Box, Brown and Forsythe, James, and Welch, as well as the analysis of variance F test, for their ability to limit the number of Type I errors and to detect true treatment group differences in one-way, completely randomized designs in which the underlying distributions were nonnormal, variances were nonhomogeneous, and groups sizes were unequal.
Abstract: Tests of mean equality proposed by Alexander and Govern, Box, Brown and Forsythe, James, and Welch, as well as the analysis of variance F test, were compared for their ability to limit the number of Type I errors and to detect true treatment group differences in one-way, completely randomized designs in which the underlying distributions were nonnormal, variances were nonhomogeneous, and group sizes were unequal. These tests were compared when the usual method of least squares was applied to estimate group means and variances and when Yuen's trimmed means and Winsorized variances were adopted. Based on the variables examined in this investigation, which included number of treatment groups, degree of population skewness, nature of the pairing of variances and group sizes, and nonnull effects of varying sizes, we recommend that researchers use trimmed means and Winsorized variances with either the Alexander and Govern, James, or Welch tests to test for mean equality.
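The recommended combination of trimmed means and Winsorized variances corresponds, in the two-group case, to Yuen's version of the Welch test. A sketch follows; the 20% trimming proportion is a common default assumed here, not a value taken from the abstract:

```python
import numpy as np
from scipy import stats

def yuen_welch(x, y, trim=0.2):
    """Welch-type test comparing trimmed means, with Winsorized variances
    (Yuen's test). Returns the t statistic and approximate df."""
    def parts(a):
        n = len(a)
        g = int(np.floor(trim * n))        # observations trimmed per tail
        h = n - 2 * g                      # effective sample size
        tm = stats.trim_mean(a, trim)
        wv = np.var(np.asarray(stats.mstats.winsorize(a, limits=(trim, trim))),
                    ddof=1)
        d = (n - 1) * wv / (h * (h - 1))   # squared standard error piece
        return tm, d, h - 1
    tm1, d1, df1 = parts(np.asarray(x))
    tm2, d2, df2 = parts(np.asarray(y))
    t = (tm1 - tm2) / np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / df1 + d2 ** 2 / df2)
    return t, df
```

Trimming guards the location estimate against skewness and outliers, while the Winsorized variance keeps the standard error consistent with the trimmed mean.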

Journal ArticleDOI
TL;DR: In this paper, the authors compared the size performance of the generalized F-test and four other widely used procedures: the Classical F-Test for ANOVA, the weighted least-squares to adjust for heteroskedasticity, the Brown-Forsythe test, and the Welch test.
Abstract: Tsui and Weerahandi (1989) introduced the notion of generalized p-values, and since then this idea has been used to solve many statistical testing problems. Heteroskedasticity is one of the major practical problems encountered in ANOVA. To compare the means of several groups under heteroskedasticity, approximate tests are used in the literature. Weerahandi (1995a) introduced a test using the notion of generalized p-values for comparing the means of several populations when the variances are not equal. This test is referred to as a generalized F-test. In this paper we compare the size performance of the generalized F-test and four other widely used procedures: the classical F-test for ANOVA, the F-test obtained by weighted least squares to adjust for heteroskedasticity, the Brown–Forsythe test, and the Welch test. The comparison is based on a simulation study of the size performance of tests applied to the balanced one-way model. The intended level of the tests is set at 0.05. While the Generalized F-test
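A size comparison of the kind this abstract describes can be mocked up by simulating under the null with unequal variances and counting rejections. The Welch ANOVA implemented below is one of the comparison procedures; note that the group sizes and variances here are assumptions chosen to make the classical F test's size distortion visible (a negative pairing of sample size and variance), not the paper's balanced design:

```python
import numpy as np
from scipy import stats

def welch_anova_pvalue(groups):
    """Welch's heteroscedasticity-robust one-way ANOVA p-value."""
    k = len(groups)
    n = np.array([len(g) for g in groups])
    m = np.array([g.mean() for g in groups])
    v = np.array([g.var(ddof=1) for g in groups])
    w = n / v                                   # precision weights
    mw = np.sum(w * m) / np.sum(w)
    a = np.sum(w * (m - mw) ** 2) / (k - 1)
    c = np.sum((1 - w / w.sum()) ** 2 / (n - 1))
    b = 1 + 2 * (k - 2) * c / (k ** 2 - 1)
    df2 = (k ** 2 - 1) / (3 * c)
    return stats.f.sf(a / b, k - 1, df2)

rng = np.random.default_rng(2)
sizes, sds = [10, 10, 40], [3.0, 3.0, 1.0]      # large variance with small n
reps, level = 2000, 0.05
rej_f = rej_w = 0
for _ in range(reps):
    groups = [sd * rng.standard_normal(n) for n, sd in zip(sizes, sds)]
    if stats.f_oneway(*groups).pvalue < level:  # classical ANOVA F test
        rej_f += 1
    if welch_anova_pvalue(groups) < level:      # Welch test
        rej_w += 1
```

All means are equal, so every rejection is a Type I error; the classical F test's empirical size is well above the nominal 0.05 under this pairing, while Welch's stays close to it.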

Journal ArticleDOI
TL;DR: In this article, a vector conditional heteroscedastic autoregressive nonlinear (CHARN) model is proposed to estimate the conditional mean and the conditional variance (volatility) matrix of the past.

Journal ArticleDOI
TL;DR: A robust statistical technique known as the jackknife is combined with the EM algorithm to provide a robust ML training algorithm; an artificial-data case, the two-dimensional XOR problem, and a real-data case, success or failure prediction of UK private construction companies, are used to evaluate the performance of this robust learning algorithm.

Journal ArticleDOI
TL;DR: In this paper, generalized hurdle models suitable for the analysis of over-dispersed or under-distributed count data are considered. But the first stage allows for asymmetric departures from the binary logit model.

Posted Content
TL;DR: In this paper, the authors discuss and test several hypotheses to explain why willingness-to-pay estimates from OE and DC-CVM questions differ, and find that WTP estimates from discrete-choice data are very sensitive to assumptions made about the random utility.
Abstract: Most comparative studies find that the discrete-choice contingent valuation method (DC-CVM) yields higher willingness-to-pay (WTP) estimates than the open-ended (OE) format. In this paper, we discuss and test several hypotheses to explain why WTP estimates from OE and DC-CVM questions differ. We find that WTP estimates from discrete-choice data are very sensitive to assumptions made about the random utility. In particular, violation of the homoscedasticity assumption may lead to biased WTP estimates if the error terms are correlated with the cost. This violation was a main source of difference in WTP estimates in our studies.

Journal ArticleDOI
TL;DR: In this article, the authors investigate alternative unconditional and conditional distributional models for the returns on Japan's Nikkei 225 stock market index among them is the recently introduced class of ARMA-GARCH models driven by α-stable (or stable Paretian) distributed innovations, designed to capture the observed serial dependence, conditional heteroskedasticity and fat-tailedness present in the return data.
Abstract: We investigate alternative unconditional and conditional distributional models for the returns on Japan's Nikkei 225 stock market index. Among them is the recently introduced class of ARMA-GARCH models driven by α-stable (or stable Paretian) distributed innovations, designed to capture the observed serial dependence, conditional heteroskedasticity and fat-tailedness present in the return data. Of the eight entertained distributions, the partially asymmetric Weibull, Student's t and asymmetric α-stable present themselves as the most viable candidates in terms of overall fit. However, the tails of the sample distribution are approximated best by the asymmetric α-stable distribution. Good tail approximations are particularly important for risk assessments.

Journal ArticleDOI
TL;DR: In this paper, a new specification test for detecting whether or not the error terms of a spatial regression model area are spatially correlated and/or heteroskedastic is proposed, which can be viewed as a test of model's specifications.

Journal ArticleDOI
TL;DR: This article applied a recursive version of the Brock-Dechert-Scheinkman statistic to daily data on two stock-market indexes between January 1980 and December 1990 and found that October 1987 is highly influential in the characterization of the stock market dynamics and appears to correspond to a shift in the distribution of stock returns.
Abstract: This article addresses the question of whether recent findings of nonlinearities in high-frequency financial time series have been contaminated by possible shifts in the distribution of the data. It applies a recursive version of the Brock–Dechert–Scheinkman statistic to daily data on two stock-market indexes between January 1980 and December 1990. It is shown that October 1987 is highly influential in the characterization of the stock-market dynamics and appears to correspond to a shift in the distribution of stock returns. Sampling experiments show that simple linear processes with shifts in variance can replicate the behavior of the tests, but autoregressive conditional heteroscedastic filters are unable to do so.

Journal ArticleDOI
TL;DR: Steady advances in available computing power have facilitated multiple-trait analyses involving continuous and discrete measures, and full Bayesian inference via the development of Markov Chain Monte Carlo methods will continue to allow even greater generality and dimensions in the genetic model.

Posted Content
Stephen Gray1
TL;DR: In this paper, a generalized regime-switching (GRS) model for short-term interest rate is proposed, which allows the short rate to exhibit both mean reversion and conditional heteroskedasticity and nests the GARCH and square root process specifications.
Abstract: This paper develops a generalized regime-switching (GRS) model of the short-term interest rate. The model allows the short rate to exhibit both mean reversion and conditional heteroskedasticity and nests the popular generalized autoregressive conditional heteroskedasticity (GARCH) and square root process specifications. Thus, the conditional variance process accommodates volatility clustering and dependence on the level of the interest rate. Switching between regimes is governed by a first-order Markov process with state-dependent transition probabilities. The GRS model is compared with various existing models of the short rate in terms of the statistical fit of short-term interest rate data and in terms of out-of-sample forecasting performance.

Journal ArticleDOI
TL;DR: The authors developed a test for the hypothesis that a series (observed in discrete time) is generated by a diffusion process based on an overidentifying relation between variance and kurtosis parameters that holds for generalized autoregressive conditional heteroscedastic diffusions.
Abstract: In this article we develop a test for the hypothesis that a series (observed in discrete time) is generated by a diffusion process. This test is based on an overidentifying relation between variance and kurtosis parameters that holds for generalized autoregressive conditional heteroscedastic diffusions. The proposed test is not specific to a particular data frequency and clearly indicates the presence of jumps in dollar exchange rates. To assess the size and intensity of the jumps, we estimate a model containing both jumps and conditional heteroscedasticity.

Journal ArticleDOI
TL;DR: The relationship among daily stock return autocorrelation, trading volume, and price limits is investigated in this paper, where the authors use OLS, generalized autoregressive conditional heteroscedasticity (GARCH) and generalized method of moments (GMM) estimation to investigate the sensitivity of the estimation results.
Abstract: The relationship among daily stock return autocorrelation, trading volume, and price limits is investigated in this paper. Twenty-four individual Taiwan stocks are adopted here. We found that increasing the volume reduces the daily autocorrelation for nearly half of the stocks. This negative volume effect is contrary to the positive price-limit effect, which strengthens the autocorrelation. We use OLS, generalized autoregressive conditional heteroscedasticity (GARCH) and generalized method of moments (GMM) estimation to investigate the sensitivity of the estimation results. Our results display robustness across estimation methods.

Posted Content
TL;DR: In this paper, a simple and easy to use method which corrects for frequency differentials and data gaps by updating the linear correlation coefficient calculation with the aid of covolatility weights is proposed.
Abstract: This study addresses three problematic issues concerning the application of the linear correlation coefficient in the high-frequency financial data domain. First, correlation of intra-day, equally spaced time series derived from unevenly spaced tick-by-tick data deserves careful treatment if a data bias resulting from the classical missing value problem is to be avoided. We propose a simple and easy to use method which corrects for frequency differentials and data gaps by updating the linear correlation coefficient calculation with the aid of covolatility weights. We view the method as a bi-variate alternative to time scale transformations which treat heteroscedasticity by expanding periods of higher volatility while contracting periods of lower volatility. Second, it is generally recognized that correlations between financial time series are unstable, and we probe the stability of correlation as a function of time for seven years of high-frequency foreign exchange rate, implied forward interest rate and stock index data. Correlations estimated over time in turn allow for estimations of the memory that correlations have for their past values. Third, previous authors have demonstrated a dramatic decrease in correlation as data frequency enters the intra-hour level (the "Epps effect"). We characterize the Epps effect for correlations between a number of financial time series and suggest a possible relation between correlation attenuation and activity rates.
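A weighted version of the linear correlation coefficient is the core device in the covolatility-weighting proposal above. The helper below is generic; the actual covolatility weight construction is the paper's contribution and is not reproduced here. With equal weights it simply recovers the ordinary Pearson coefficient:

```python
import numpy as np

def weighted_corr(x, y, w):
    """Weighted linear correlation coefficient. With covolatility weights
    this follows the spirit of the proposed update; the weight construction
    itself is left to the caller (an assumption, not the paper's formula)."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                              # normalize weights
    mx, my = np.sum(w * x), np.sum(w * y)        # weighted means
    cov = np.sum(w * (x - mx) * (y - my))
    vx = np.sum(w * (x - mx) ** 2)
    vy = np.sum(w * (y - my) ** 2)
    return cov / np.sqrt(vx * vy)
```

Down-weighting intervals with little co-movement (data gaps, quiet periods) is what corrects the bias from the missing-value problem described in the abstract.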

Journal ArticleDOI
TL;DR: In this paper, the authors address the problem of constructing designs for regression models in the presence of both possible heteroscedasticity and an approximately and possibly incorrectly specified response function.
Abstract: This article addresses the problem of constructing designs for regression models in the presence of both possible heteroscedasticity and an approximately and possibly incorrectly specified response function. Working with very general models for both types of departure from the classical assumptions, I exhibit minimax designs and correspondingly optimal weights. Simulation studies and a case study accompanying the theoretical results lead to the conclusions that the robust designs yield substantial gains over some common competitors, in the presence of realistic departures that are sufficiently mild so as to be generally undetectable by common test procedures. Specifically, I exhibit solutions to the following problems: P1, for ordinary least squares, determine a design to minimize the maximum value of the integrated mean squared error (IMSE) of the fitted values, with the maximum being evaluated over both types of departure; P2, for weighted least squares, determine both weights and a design to m...