
Showing papers on "Heteroscedasticity published in 2005"


Journal ArticleDOI
TL;DR: In this article, the authors compare the performance of existing methods and some new models for predicting value-at-risk (VaR) in a univariate context using more than 30 years of the daily return data on the NASDAQ Composite Index.
Abstract: Given the growing need for managing financial risk, risk prediction plays an increasing role in banking and finance. In this study we compare the out-of-sample performance of existing methods and some new models for predicting value-at-risk (VaR) in a univariate context. Using more than 30 years of the daily return data on the NASDAQ Composite Index, we find that most approaches perform inadequately, although several models are acceptable under current regulatory assessment rules for model adequacy. A hybrid method, combining a heavy-tailed generalized autoregressive conditionally heteroskedastic (GARCH) filter with an extreme value theory-based approach, performs best overall, closely followed by a variant on a filtered historical simulation, and a new model based on heteroskedastic mixture distributions. Conditional autoregressive VaR (CAViaR) models perform inadequately, though an extension to a particular CAViaR model is shown to outperform the others.
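
A minimal sketch of the hybrid approach the abstract describes, a heavy-tailed GARCH filter followed by an extreme value theory (EVT) tail fit on the standardized residuals, assuming the third-party arch package is available; the simulated series, threshold choice, and 1% level are illustrative, not the authors' exact setup.

```python
import numpy as np
from scipy import stats
from arch import arch_model  # third-party GARCH library (assumed available)

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=3000)  # stand-in for NASDAQ daily returns

# Step 1: filter the returns with a heavy-tailed GARCH(1,1) model.
res = arch_model(returns, vol="GARCH", p=1, q=1, dist="t").fit(disp="off")
z = res.std_resid  # standardized residuals, approximately i.i.d.

# Step 2: fit a generalized Pareto distribution to the lower tail (EVT step).
losses = -z
u = np.quantile(losses, 0.90)          # illustrative tail threshold
excess = losses[losses > u] - u
xi, _, beta = stats.genpareto.fit(excess, floc=0.0)

# Step 3: GPD tail quantile of the standardized loss, scaled by the
# one-step-ahead volatility forecast.
alpha, n, n_u = 0.01, len(losses), len(excess)
q_z = u + (beta / xi) * ((alpha * n / n_u) ** (-xi) - 1.0)
sigma_f = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])
print("1% one-day VaR:", res.params["mu"] - sigma_f * q_z)
```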

635 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that the most common approach, weighted least squares, will usually lead to inefficient estimates and underestimated standard errors, and they also suggest two simple alternative FGLS approaches that are more efficient and yield consistent standard error estimates.
Abstract: Researchers often use as dependent variables quantities estimated from auxiliary data sets. Estimated dependent variable (EDV) models arise, for example, in studies where counties or states are the units of analysis and the dependent variable is an estimated mean, proportion, or regression coefficient. Scholars fitting EDV models have generally recognized that variation in the sampling variance of the observations on the dependent variable will induce heteroscedasticity. We show that the most common approach to this problem, weighted least squares, will usually lead to inefficient estimates and underestimated standard errors. In many cases, OLS with White’s or Efron heteroscedastic consistent standard errors yields better results. We also suggest two simple alternative FGLS approaches that are more efficient and yield consistent standard error estimates. Finally, we apply the various alternative estimators to a replication of Cohen’s (2004) cross-national study of presidential approval.
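
A sketch of the estimators under comparison, on synthetic estimated-dependent-variable data: OLS with White-type standard errors, the criticized sampling-variance-only WLS, and a two-step FGLS that weights by the full error variance. This follows the general recipe the abstract describes; the variable names and the moment-based estimate of the structural variance are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
omega = rng.uniform(0.1, 2.0, size=n)  # known sampling variances of the estimated DV
y = (1.0 + 0.5 * x
     + rng.normal(scale=0.7, size=n)        # structural error, variance sigma^2
     + rng.normal(scale=np.sqrt(omega)))    # sampling error, variance omega_i
X = sm.add_constant(x)

# OLS with White-type heteroskedasticity-consistent standard errors.
ols = sm.OLS(y, X).fit(cov_type="HC1")

# Naive WLS that weights only by the sampling variances (the criticized practice).
wls = sm.WLS(y, X, weights=1.0 / omega).fit()

# Two-step FGLS: estimate sigma^2 from OLS residuals, then weight by the
# full error variance sigma^2 + omega_i.
sigma2 = max(float(np.mean(ols.resid ** 2 - omega)), 0.0)
fgls = sm.WLS(y, X, weights=1.0 / (sigma2 + omega)).fit()
print(ols.bse[1], wls.bse[1], fgls.bse[1])
```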

515 citations


Book
01 Sep 2005
TL;DR: A textbook covering basic data handling, the classical linear regression model and violations of its assumptions (multicollinearity, heteroskedasticity, autocorrelation, misspecification), further topics in econometrics, time series econometrics, and panel data econometrics, as discussed by the authors.
Abstract: Introduction PART ONE: STATISTICAL BACKGROUND AND BASIC DATA HANDLING The Structure of Economic Data Working With Data: Basic Data Handling PART TWO: THE CLASSICAL LINEAR REGRESSION MODEL Simple Regression Multiple Regression PART THREE: VIOLATING THE ASSUMPTIONS OF THE CLRM Multicollinearity Heteroskedasticity Autocorrelation Misspecification: Wrong Regressors, Measurement Errors and Wrong Functional Forms PART FOUR: TOPICS IN ECONOMETRICS Dummy Variables Dynamic Econometric Models Simultaneous Equation Models PART FIVE: TIME SERIES ECONOMETRICS ARIMA Models and the Box-Jenkins Methodology Modelling the Variance: ARCH-GARCH Models Vector Autoregressive (VAR) Models and Causality Tests Non Stationarity and Unit-Root Tests Cointegration and Error-Correction Models PART SIX: PANEL DATA ECONOMETRICS Traditional Panel Data Models Dynamic Heterogeneous Panels Non-Stationary Panels Practicalities in Using EViews and Microfit

495 citations


Journal ArticleDOI
TL;DR: In this paper, a flexible, intuitive and semiparametric estimator of distribution functions in the presence of covariates is proposed, and the conditional distribution is integrated over the range of the covariates to obtain an estimate of the unconditional distribution.

484 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method to introduce skewness in multivariate symmetric distributions, which leads to a "multivariate skew-student" density in which each marginal has a specific asymmetry coefficient.
Abstract: We propose a practical and flexible method to introduce skewness in multivariate symmetric distributions. Applying this procedure to the multivariate Student density leads to a “multivariate skew-Student” density in which each marginal has a specific asymmetry coefficient. Combined with a multivariate generalized autoregressive conditional heteroscedasticity model, this new family of distributions is found to be more useful than its symmetric counterpart for modeling stock returns and especially for forecasting the value-at-risk of portfolios.
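
For intuition, a sketch of one standard way to skew a symmetric density with a per-marginal asymmetry coefficient (the Fernández-Steel construction, close in spirit to the skew-Student used here); this is an assumed form for illustration, not necessarily the authors' exact parameterization.

```python
import numpy as np
from scipy import stats

def skew_t_pdf(z, nu, xi):
    """Fernandez-Steel skewed Student t: rescale the two half-densities
    by xi and 1/xi, so xi > 1 gives right skew and xi < 1 gives left skew."""
    norm = 2.0 / (xi + 1.0 / xi)  # makes the density integrate to one
    return norm * np.where(z >= 0,
                           stats.t.pdf(z / xi, df=nu),
                           stats.t.pdf(z * xi, df=nu))

z = np.linspace(-5, 5, 7)
print(skew_t_pdf(z, nu=6.0, xi=1.3))  # asymmetric around zero
```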

290 citations


Journal ArticleDOI
TL;DR: A comparison of 93 studies that conducted tests of volatility-forecasting methods on a wide range of financial asset returns is presented in this paper, which found that option-implied volatility provides more accurate forecasts than time-series models.
Abstract: A comparison is presented of 93 studies that conducted tests of volatility-forecasting methods on a wide range of financial asset returns. The survey found that option-implied volatility provides more accurate forecasts than time-series models. Among the time-series models, no model is a clear winner, although a possible ranking is as follows: historical volatility, generalized autoregressive conditional heteroscedasticity, and stochastic volatility. The survey produced some practical suggestions for volatility forecasting.

235 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that interpreting the parameters of log-linearized models estimated by ordinary least squares as elasticities can be highly misleading in the presence of heteroskedasticity, an implication of Jensen's inequality, and they apply a proposed estimator to the gravity equation for trade, finding significant differences between the new estimates and those obtained with the traditional method.
Abstract: Although economists have long been aware of Jensen's inequality, many econometric applications have neglected an important implication of it: the standard practice of interpreting the parameters of log-linearized models estimated by ordinary least squares as elasticities can be highly misleading in the presence of heteroskedasticity. This paper explains why this problem arises and proposes an appropriate estimator. Our criticism of conventional practices and the solution we propose extends to a broad range of economic applications where the equation under study is log-linearized. We develop the argument using one particular illustration, the gravity equation for trade, and apply the proposed technique to provide new estimates of this equation. We find significant differences between estimates obtained with the proposed estimator and those obtained with the traditional method.
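
The appropriate estimator proposed in this line of work is Poisson pseudo-maximum likelihood (PPML), which fits the multiplicative model directly instead of log-linearizing it. A sketch on synthetic gravity-style data, with the heteroskedasticity built so that the log-linearized OLS elasticity is biased while PPML is not (all names and parameter values illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
log_dist = rng.normal(size=n)
mu = np.exp(1.0 - 0.8 * log_dist)               # true multiplicative model
s = 0.7 * np.exp(0.3 * log_dist)                # heteroskedastic error scale
eta = rng.lognormal(mean=-0.5 * s**2, sigma=s)  # E[eta | x] = 1 by construction
trade = mu * eta
X = sm.add_constant(log_dist)

# Log-linearized OLS: the elasticity estimate is biased because
# E[log eta | x] varies with x (Jensen's inequality at work).
ols = sm.OLS(np.log(trade), X).fit()

# PPML: estimates the multiplicative model directly and stays consistent.
ppml = sm.GLM(trade, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print("OLS:", ols.params[1], "PPML:", ppml.params[1])
```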

221 citations


Proceedings ArticleDOI
07 Aug 2005
TL;DR: An algorithm is presented that estimates simultaneously both the mean and the variance of a nonparametric regression problem; unlike standard Gaussian Process regression or SVMs, it estimates the variance locally, and the resulting convex optimization problem can be solved via Newton's method.
Abstract: This paper presents an algorithm to estimate simultaneously both mean and variance of a nonparametric regression problem. The key point is that we are able to estimate variance locally, unlike standard Gaussian Process regression or SVMs. This means that our estimator adapts to the local noise. The problem is cast in the setting of maximum a posteriori estimation in exponential families. Unlike previous work, we obtain a convex optimization problem which can be solved via Newton's method.
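
A simplified parametric sketch of the idea: maximize a penalized Gaussian log-likelihood jointly over a mean function and a log-variance function, here with a generic quasi-Newton optimizer in place of the paper's exponential-family formulation (which is what delivers convexity and the clean Newton step). The basis, penalty, and names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 200)
sigma_true = 0.05 + 0.3 * x                 # noise level grows with x
y = np.sin(2 * np.pi * x) + rng.normal(scale=sigma_true)

B = np.vander(x, 4)                         # small polynomial basis for both parts

def neg_log_posterior(theta, lam=1e-3):
    b_mu, b_v = theta[:4], theta[4:]
    mu, log_var = B @ b_mu, B @ b_v
    nll = 0.5 * np.sum(log_var + (y - mu) ** 2 / np.exp(log_var))
    return nll + lam * theta @ theta        # Gaussian prior -> ridge penalty

fit = minimize(neg_log_posterior, np.zeros(8), method="BFGS")
b_v = fit.x[4:]
# The estimated local noise standard deviation adapts to x.
print("noise std near x=0.1 and x=0.9:", np.exp(0.5 * (B @ b_v))[[20, 180]])
```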

194 citations


Journal ArticleDOI
14 Jan 2005
TL;DR: In this article, the authors consider spatial lags of certain independent variables, as well as of the dependent variable, and consider spatial correlation of the error terms, general patterns of heteroscedasticity and of time series autocorrelation, and systems problems.
Abstract: In recent years researchers have considered a variety of regional models relating to infrastructure productivity. These models are often based upon overly simple econometric specifications and are typically formulated as if spatial interactions are absent. In this paper, we try to account for some of these shortcomings. We do this by considering spatial lags of certain independent variables, as well as of the dependent variable. We also consider spatial correlation of the error terms, general patterns of heteroscedasticity and of time series autocorrelation, and systems problems. Our results strongly suggest that regional infrastructure productivity involves spatial spillovers relating to both observable variables and error terms. They also suggest that corresponding coefficient estimates are very sensitive to model specifications.
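
For concreteness, a sketch of the basic objects the abstract refers to: a row-normalized spatial weight matrix W and the spatial lags Wy and WX of the dependent and independent variables (the contiguity structure here is randomly generated and purely illustrative).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
W = (rng.uniform(size=(n, n)) < 0.4).astype(float)  # illustrative contiguity draws
W = np.triu(W, 1)
W = W + W.T                                         # symmetric, zero diagonal
rows = W.sum(axis=1, keepdims=True)
rows[rows == 0] = 1.0                               # guard against isolated units
W = W / rows                                        # row-normalized weights

y = rng.normal(size=n)
X = rng.normal(size=(n, 2))
Wy = W @ y   # spatial lag of the dependent variable
WX = W @ X   # spatial lags of the independent variables
print(Wy)
print(WX)
```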

185 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derive the asymptotic covariance matrix of a vector of autocorrelations for residuals of ARMA models under weak assumptions on the noise; a consistent estimator of this matrix and a modification of the portmanteau tests are proposed.
Abstract: We consider tests for lack of fit in ARMA models with nonindependent innovations. In this framework, the standard Box–Pierce and Ljung–Box portmanteau tests can perform poorly. Specifically, the usual text book formulas for asymptotic distributions are based on strong assumptions and should not be applied without careful consideration. In this article we derive the asymptotic covariance matrix of a vector of autocorrelations for residuals of ARMA models under weak assumptions on the noise. The asymptotic distribution of the portmanteau statistics follows. A consistent estimator of this covariance matrix, and a modification of the portmanteau tests, are proposed. This allows us to construct valid asymptotic significance limits for the residual autocorrelations, and (asymptotically) valid goodness-of-fit tests, when the underlying noise process is assumed to be noncorrelated rather than independent or a martingale difference. A set of Monte Carlo experiments, and an application to the Standard & Poor 500 returns, illustrate the p...
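
For reference, a numpy sketch of the Ljung-Box portmanteau statistic with its textbook chi-square p-value, the i.i.d.-innovations approximation that this paper shows can be invalid when the noise is merely uncorrelated; the lag choice is illustrative.

```python
import numpy as np
from scipy import stats

def ljung_box(resid, m):
    """Ljung-Box Q statistic on the first m residual autocorrelations,
    with the standard (i.i.d.-innovations) chi-square p-value."""
    e = resid - resid.mean()
    n = len(e)
    denom = np.sum(e ** 2)
    rho = np.array([np.sum(e[k:] * e[:-k]) / denom for k in range(1, m + 1)])
    q = n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, m + 1)))
    return q, stats.chi2.sf(q, df=m)  # reduce df by p + q after an ARMA fit

rng = np.random.default_rng(5)
print(ljung_box(rng.normal(size=500), m=10))
```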

175 citations


Journal ArticleDOI
TL;DR: In this article, a new model for autoregressive conditional heteroscedasticity and kurtosis is proposed, which uses only the standard Student's t-density and can be estimated simply using maximum likelihood.
Abstract: This article proposes a new model for autoregressive conditional heteroscedasticity and kurtosis. Via a time-varying degrees of freedom parameter, the conditional variance and conditional kurtosis are permitted to evolve separately. The model uses only the standard Student’s t-density and consequently can be estimated simply using maximum likelihood. The method is applied to a set of four daily financial asset return series comprising U.S. and U.K. stocks and bonds, and significant evidence in favor of the presence of autoregressive conditional kurtosis is observed. Various extensions to the basic model are proposed, and we show that the response of kurtosis to good and bad news is not significantly asymmetric.

Journal ArticleDOI
TL;DR: In this paper, a theory of quantile regression in the tails was developed, which obtains the large sample properties of extremal (extreme order and intermediate order) quantile estimators for the linear quantile regressions with the tails restricted to the domain of minimum attraction.
Abstract: Quantile regression is an important tool for estimation of conditional quantiles of a response Y given a vector of covariates X. It can be used to measure the effect of covariates not only in the center of a distribution, but also in the upper and lower tails. This paper develops a theory of quantile regression in the tails. Specifically, it obtains the large sample properties of extremal (extreme order and intermediate order) quantile regression estimators for the linear quantile regression model with the tails restricted to the domain of minimum attraction and closed under tail equivalence across regressor values. This modeling setup combines restrictions of extreme value theory with leading homoscedastic and heteroscedastic linear specifications of regression analysis. In large samples, extreme order regression quantiles converge weakly to arg min functionals of stochastic integrals of Poisson processes that depend on regressors, while intermediate regression quantiles and their functionals converge to normal vectors with variance matrices dependent on the tail parameters and the regressor design.
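
An illustrative extremal quantile regression fit with statsmodels on a heteroscedastic linear design, where the tail slopes differ from the median slope; the paper's contribution is the asymptotic theory for inference at such tail quantiles, which standard normal approximations miss.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(6)
n = 5000
x = rng.uniform(size=n)
# Heteroscedastic linear model: the tail slope differs from the median slope.
y = 1.0 + 2.0 * x + (0.5 + x) * rng.standard_t(df=3, size=n)

X = sm.add_constant(x)
for tau in (0.5, 0.05, 0.01):   # central versus extremal quantiles
    fit = QuantReg(y, X).fit(q=tau)
    print(tau, fit.params)
```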

Report SeriesDOI
TL;DR: In this paper, a control function estimator is proposed to adjust for endogeneity in the triangular simultaneous equations model where there are no available exclusion restrictions to generate suitable instruments, but the form of the error dependence on the exogenous variables is not parametrically specified.

Journal ArticleDOI
TL;DR: In this article, the effects of permanent changes in the variance of the errors of an autoregressive process on unit root tests were investigated, and it was shown that non-constant variances can both inflate and deflate the rejection frequency of the commonly used unit root test, both under the null and under the alternative.
Abstract: The paper provides a general framework for investigating the effects of permanent changes in the variance of the errors of an autoregressive process on unit root tests. Such a framework – which is based on a novel asymptotic theory for integrated and near integrated processes with heteroskedastic errors – allows one to evaluate how the variance dynamics affect the size and the power function of unit root tests. Contrary to previous studies, it is shown that non-constant variances can both inflate and deflate the rejection frequency of the commonly used unit root tests, both under the null and under the alternative, with early negative and late positive variance changes having the strongest impact on size and power. It is also shown that shifts smoothed across the sample have smaller impacts than shifts occurring as a single abrupt jump, while periodic variances have a negligible effect even when a small number of cycles take place over a given sample. Finally, it is proved that the locally best invar...
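
A quick Monte Carlo sketch of the size distortion described here: an exact unit root with a single late upward variance shift, tested with the standard ADF test from statsmodels at a nominal 5% level (sample size, break point, and shift magnitude are illustrative).

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(7)
T, reps, rej = 200, 200, 0
for _ in range(reps):
    sigma = np.where(np.arange(T) < 0.8 * T, 1.0, 3.0)  # late positive variance shift
    y = np.cumsum(sigma * rng.normal(size=T))           # exact unit root
    pval = adfuller(y, regression="c", autolag="AIC")[1]
    rej += pval < 0.05
print("empirical size:", rej / reps, "(nominal 0.05)")
```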

Book
01 Jan 2005
TL;DR: A textbook introduction to econometrics, covering statistical inference, simple and multiple regression, functional form, heteroskedasticity, time series concepts, and cointegration, with applications to production functions, economic growth, and cost functions.
Abstract: 1. ECONOMIC DATA AND ECONOMIC MODELS. Economic Data. Economic Models. Descriptive Statistics Versus Statistical Inference. 2. STATISTICAL INFERENCE. Populations, Samples and Parameters. Statistics and Sampling Distributions. Properties of Estimators. Derivation of Estimators. Hypothesis Testing. Further Topics in Hypothesis Testing. Inference is Conditional on the Model. Econometrics and Statistics. Statistical Methodology and the Philosophy of Science. 3. RELATIONSHIPS BETWEEN VARIABLES. Covariance and Correlation. Regression. Deviation Form Notation. Conclusions. 4. SIMPLE REGRESSION. Model Specification. Least Squares Estimation. Sampling Properties of the Least Squares Estimators. The Sampling Distributions of a and B. Hypothesis Testing. Decomposition of Sample Variation. Presentation of Regression Results. Scaling and Units of Measure. Sampling, Numerical, and Invariance Properties. Application: Output and Production Costs. 5. SUPPLEMENTARY TOPICS IN REGRESSION. Forecasting. Regression Through the Origin. When Regression Goes Wrong. 6. MATTERS OF FUNCTIONAL FORM. Loglinear Models. Log-Lin Models. Lin-Log Models. Reciprocal Models. Application: Engel Curves. Conclusions. 7. APPLICATIONS TO PRODUCTION FUNCTIONS. General Features of Production Functions. The Cobb-Douglas Production Function. Technical Change. Testing Marginal Productivity Conditions. Conclusions. 8.MULTIPLE REGRESSION. Model Specification. Least Squares Estimation. Properties of Least Squares Estimators. Hypothesis Testing. Decomposition of a Sample Variation. Application: Electricity Demand. Multicollinearity. Application: the Quadratic Cost Function. Model Misspecification. Pre-Test Estimation. 9.APPLICATION TO ECONOMIC GROWTH. Introduction. The Textbook Solow-Swan Model. Human Capital in the Solow-Swan Model. Summary: Mankiw, Romer, and Weil in a Nutshell. Conclusions. 10. DUMMY VARIABLES AND RESTRICTED COEFFICIENTS. Dummy Variables. Restricted Coefficients. Identification. 11. APPLICATIONS TO COST FUNCTIONS. The Cost Function. Deriving the Cost Function. Using the Cost Function. Returns to Scale in Electricity Generation. The Translog Cost Function. Consumer Demand. Further Reading. 12. MODEL DISCOVERY. Data Mining. Specification Testing. Non-nested Testing. Model Choice. Should the Equation Be Part of a System? Conclusions. 13. NONLINEAR REGRESSION. Introduction. Nonlinear Least Squares. Computer Numerics. Reparameterization. Identification. Sampling Properties of NLS Estimators. Estimating Sigma 2. Hypothesis Testing. Conclusions. 14. HETEROSKEDASTICITY. Consequences for Ordinary Least Squares. Heteroskedasticity-Robust Tests. Weighted Least Squares. Testing for Heteroskedasticity. 15. TIME SERIES: SOME BASIC CONCEPTS. Introduction. White Noise. Measuring Temporal Dependence. Stationarity and Nonstationarity. Trend Stationary Processes. A Random Walk. A Random Walk with Drift. Key Properties of Random Walks. Conclusions. 16. FLUCTUATIONS. Introduction. Moving Average Processes. Autoregressive Processes. The Stationarity Condition. Key Properties of Moving Average and Autoregressive Processes. Autoregressive-Moving Average Processes. 17. TRENDS. The Constant Growth Model Revisited. Trend and Difference Stationary Processes. Testing for Stochastic Trends. Higher Orders of Integration. 18. COINTEGRATION. Long Run Relationships Between Variables. Relationships Between Variables. The Arithmetic of Integrated Processes. Cointegration. The Engle-Granger Test for Cointegration. 
Testing Restrictions on the Cointegrating Vector. Error Correction Models. The ECM of VAR. Cointegrating Rank. Conclusions and Further Reading. APPENDIX A: LAWS OF SUMMATION AND DEVIATION FORM. Laws of Summation. Laws of Deviation Form. APPENDIX B: DISTRIBUTION THEORY. Random Variables and Probability Distribution. Mathematical Expectation. Expected Value of a Function. Variance. Variance of a Function. Standardized Random Variables. Bivariate Distributions. Conditional Distributions and Expectation. Statistical Independence. Functions of Two Random Variables. Variance of a Linear Combination. Laws of Expectation and Variance: A Summary. ON CD-ROM C: PORTFOLIO THEORY AND THE CAPM (OPTIONAL). Introduction. Risky Assets. Portfolios. The Markowitz Frontier. The Tobin Frontier. The Diversification Effect. Portfolio Optimization. Further Reading. REFERENCES. STATISTICAL TABLES.

Journal ArticleDOI
TL;DR: In this article, three general classes of state space models are presented, using the single source of error formulation, to provide exact analytic (matrix) expressions for forecast error variances that can be used to construct prediction intervals.
Abstract: Three general classes of state space models are presented, using the single source of error formulation. The first class is the standard linear model with homoscedastic errors, the second retains the linear structure but incorporates a dynamic form of heteroscedasticity, and the third allows for non-linear structure in the observation equation as well as heteroscedasticity. These three classes provide stochastic models for a wide variety of exponential smoothing methods. We use these classes to provide exact analytic (matrix) expressions for forecast error variances that can be used to construct prediction intervals one or multiple steps ahead. These formulas are reduced to non-matrix expressions for 15 state space models that underlie the most common exponential smoothing methods. We discuss relationships between our expressions and previous suggestions for finding forecast error variances and prediction intervals for exponential smoothing methods. Simpler approximations are developed for the more complex schemes and their validity examined. The paper concludes with a numerical example using a non-linear model. Copyright © 2005 John Wiley & Sons, Ltd.
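
For the simplest member of this class, simple exponential smoothing under the homoscedastic single-source-of-error model, the h-step-ahead forecast error variance has the closed form sigma^2 * (1 + (h - 1) * alpha^2). A sketch of the resulting prediction intervals (names and parameter values illustrative):

```python
import numpy as np

def ses_intervals(y, alpha, sigma2, h_max, z=1.96):
    """Point forecasts and ~95% prediction intervals for simple exponential
    smoothing under the single-source-of-error state space model."""
    level = y[0]
    for obs in y[1:]:
        level = level + alpha * (obs - level)         # state update
    h = np.arange(1, h_max + 1)
    var_h = sigma2 * (1.0 + (h - 1) * alpha ** 2)     # exact analytic variance
    half = z * np.sqrt(var_h)
    return level, level - half, level + half

rng = np.random.default_rng(8)
y = np.cumsum(rng.normal(size=120) * 0.5) + 10
print(ses_intervals(y, alpha=0.3, sigma2=0.25, h_max=4))
```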

Journal ArticleDOI
TL;DR: In this article, the authors propose a new class of specification tests for time series conditional mean models, where the dimension of the conditioning information set may be infinite Both linear and nonlinear conditional mean specifications are covered, and the tests can detect a wide range of model misspecifications in mean.
Abstract: Economic theories in time series contexts usually have implications on and only on the conditional mean dynamics of underlying economic variables. We propose a new class of specification tests for time series conditional mean models, where the dimension of the conditioning information set may be infinite. Both linear and nonlinear conditional mean specifications are covered. The tests can detect a wide range of model misspecifications in mean while being robust to conditional heteroscedasticity and higher order time-varying moments of unknown form. They check a large number of lags, but naturally discount higher order lags, which is consistent with the stylized fact that economic behaviours are more affected by the recent past events than by the remote past events. No specific estimation method is required, and the tests have the appealing "nuisance parameter free" property that parameter estimation uncertainty has no impact on the limit distribution of the tests. A simulation study shows that it is important to take into account the impact of conditional heteroscedasticity; failure to do so will cause overrejection of a correct conditional mean model. In a horse race competition on testing linearity in mean, our tests have omnibus and robust power against a variety of alternatives relative to some existing tests. In an application, we find that after removing significant but possibly spurious autocorrelations due to nonsynchronous trading, there still exists significant predictable nonlinearity in mean for S&P 500 and NASDAQ daily returns.

Posted Content
TL;DR: In this paper, the implied factor GARCH model is used to test the number of factors needed to model the conditional heteroskedasticity in the considered time series vector assuming normally distributed errors.
Abstract: The paper studies a factor GARCH model and develops test procedures which can be used to test the number of factors needed to model the conditional heteroskedasticity in the considered time series vector. Assuming normally distributed errors, the parameters of the model can be straightforwardly estimated by the method of maximum likelihood. Inefficient but computationally simple preliminary estimates are first obtained and used as initial values to maximize the likelihood function. Maximum likelihood estimation with nonnormal errors is also straightforward. Motivated by the empirical application of the paper, a mixture of normal distributions is considered. An interesting feature of the implied factor GARCH model is that some parameters of the conditional covariance matrix which are not identifiable in the case of normal errors become identifiable when the mixture distribution is used. As an empirical example we consider a system of four exchange rate return series.

Book
14 Aug 2005
TL;DR: A textbook on econometrics covering the linear regression model, linear estimators and the Gauss-Markov theorem, specification and hypothesis testing, and further topics in regression including heteroskedastic disturbances, instrumental variables, panel data, and logit and probit models.
Abstract: OVERVIEW Part I - The Linear Regression Model 1. What is Econometrics? 2. Choosing Estimators: Intuition and Monte Carlo Methods 3. Linear Estimators and the Gauss-Markov Theorem 4. BLUE Estimators for the Slope and Intercept of a Straight Line 5. Residuals 6. Multiple Regression Part II - Specification and Hypothesis Testing 7. Testing Single Hypotheses in Regression Models 8. Superfluous and Omitted Variables, Multicollinearity and Binary Variables 9. Testing Multiple Hypotheses Part III - Further Topics in Regression 10. Heteroskedastic Disturbances 11. Autoregressive Disturbances 12. Large Sample Properties Of Estimators: Consistency and Asymptotic Efficiency 13. Instrumental Variables Estimation 14. Systems of Equations 15. Randomized Experiments and Natural Experiments 16. Analyzing Panel Data 17. Forecasting 18. Stochastically Trending Variables 19. Logit and Probit Models: Truncated and Censored Samples Statistical Appendix WEB EXTENSION 1 USING CALCULUS AND ALGEBRA FOR THE SIMPLEST CASE: n = 3 WEB EXTENSION 2 LOCAL AVERAGE TREATMENT EFFECTS WEB EXTENSION 3 GENERALIZED METHOD OF MOMENTS ESTIMATORS AND IDENTIFICATION WEB EXTENSION 4 MAXIMUM LIKELIHOOD ESTIMATION WEB EXTENSION 5 ESTIMATORS FOR SYSTEMS OF EQUATIONS WEB EXTENSION 6 MULTIPLE COINTEGRATING RELATIONSHIPS WEB EXTENSION 7 LOG-ODDS AND LOGIT MODELS: USING GROUPED DATA WEB EXTENSION 8 MULTINOMIAL MODELS

Journal ArticleDOI
TL;DR: In this article, the authors propose tests for hypotheses on the parameters of the deterministic trend function of a univariate time series, which do not require knowledge of the form of serial correlation in the data and are robust to strong serial correlation.
Abstract: We propose tests for hypotheses on the parameters of the deterministic trend function of a univariate time series. The tests do not require knowledge of the form of serial correlation in the data, and they are robust to strong serial correlation. The data can contain a unit root, and the tests still have the correct size asymptotically. The tests that we analyze are standard heteroscedasticity autocorrelation robust tests based on nonparametric kernel variance estimators. We analyze these tests using the fixed-b asymptotic framework recently proposed by Kiefer and Vogelsang. This analysis allows us to analyze the power properties of the tests with regard to bandwidth and kernel choices. Our analysis shows that among popular kernels, specific kernel and bandwidth choices deliver tests with maximal power within a specific class of tests. Based on the theoretical results, we propose a data-dependent bandwidth rule that maximizes integrated power. Our recommended test is shown to have power that dominates a related test...
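
A numpy sketch of the Bartlett-kernel long-run variance estimator that underlies such heteroscedasticity-autocorrelation robust tests; under the fixed-b framework the bandwidth is a fixed fraction b of the sample size rather than a vanishing one (b = 0.1 below is an illustrative choice, not the paper's recommended rule).

```python
import numpy as np

def bartlett_lrv(v, b):
    """Bartlett-kernel long-run variance of a scalar series v,
    with bandwidth M = b * T held at a fixed fraction of T."""
    v = v - v.mean()
    T = len(v)
    M = max(int(b * T), 1)
    lrv = np.sum(v * v) / T                     # lag-0 autocovariance
    for j in range(1, M + 1):
        w = 1.0 - j / (M + 1.0)                 # Bartlett weights
        gamma_j = np.sum(v[j:] * v[:-j]) / T
        lrv += 2.0 * w * gamma_j
    return lrv

rng = np.random.default_rng(9)
e = np.convolve(rng.normal(size=600), [1.0, 0.6], mode="valid")  # MA(1) noise
print(bartlett_lrv(e, b=0.1))   # true long-run variance is (1 + 0.6)^2 = 2.56
```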

Journal ArticleDOI
TL;DR: In this article, generalized method of moments (GMM) estimators that use heteroskedasticity and autocorrelation consistent (HAC) positive definite weight matrices and generalized empirical likelihood estimators based on smoothed moment conditions are analyzed.
Abstract: For stationary time series models with serial correlation, we consider generalized method of moments (GMM) estimators that use heteroskedasticity and autocorrelation consistent (HAC) positive definite weight matrices and generalized empirical likelihood (GEL) estimators based on smoothed moment conditions. Following the analysis of Newey and Smith (2004) for independent observations, we derive second order asymptotic biases of these estimators. The inspection of bias expressions reveals that the use of smoothed GEL, in contrast to GMM, removes the bias component associated with the correlation between the moment function and its derivative, while the bias component associated with third moments depends on the employed kernel function. We also analyze the case of no serial correlation, and find that the seemingly unnecessary smoothing and HAC estimation can reduce the bias for some of the estimators.

Journal ArticleDOI
TL;DR: In this paper, the authors describe how to specify, estimate, and test multiple-equation, fixed-effect, panel-data equations in Stata by specifying the system of equations as seemingly unrelated regressions.
Abstract: This paper describes how to specify, estimate, and test multiple-equation, fixed-effect, panel-data equations in Stata. By specifying the system of equations as seemingly unrelated regressions, Sta...

Journal ArticleDOI
01 Jul 2005
TL;DR: Traditional econometric models assume a constant one-period forecast variance, but many financial time series display volatility clustering, motivating autoregressive conditional heteroskedastic (ARCH) models.
Abstract: Traditional econometric models assume a constant one period forecast variance. However, many financial time series display volatility clustering, that is, autoregressive conditional heteroskedastic...
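
A minimal simulation of the ARCH(1) process this literature introduced, showing volatility clustering: the levels are serially uncorrelated while the squares are not (parameter values illustrative).

```python
import numpy as np

rng = np.random.default_rng(10)
T, omega, a1 = 1000, 0.2, 0.5
y = np.zeros(T)
sigma2 = np.full(T, omega / (1 - a1))       # start at the unconditional variance
for t in range(1, T):
    sigma2[t] = omega + a1 * y[t - 1] ** 2  # conditional variance from past shock
    y[t] = np.sqrt(sigma2[t]) * rng.normal()

def acf1(x):
    x = x - x.mean()
    return np.sum(x[1:] * x[:-1]) / np.sum(x * x)

# Squared returns are autocorrelated even though returns are not: clustering.
print("acf1(y):", acf1(y), "acf1(y^2):", acf1(y ** 2))
```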

Posted Content
TL;DR: In this paper, the authors used the All Ordinaries Index and the corresponding Share Price Index futures contract to estimate optimal hedge ratios, adopting several specifications such as an ordinary least squares-based model, a vector autoregression, vector error-correction model and a diagonal-vec multivariate generalized autoregressive conditional heteroscedasticity model.
Abstract: We use the All Ordinaries Index and the corresponding Share Price Index futures contract written against the All Ordinaries Index to estimate optimal hedge ratios, adopting several specifications: an ordinary least squares-based model, a vector autoregression, a vector error-correction model and a diagonal-vec multivariate generalized autoregressive conditional heteroscedasticity model. Hedging effectiveness is measured using a risk-return comparison and a utility maximization method. We find that time-varying generalized autoregressive conditional heteroscedasticity hedge ratios perform better than constant hedge ratios in terms of minimizing risks, but when return effects are also considered, the utility-based measure prefers the ordinary least squares method in the in-sample hedge, whilst both approaches favour the conditional time-varying multivariate generalized autoregressive conditional heteroscedasticity hedge ratio estimates in out-of-sample analyses.
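
The simplest benchmark in such comparisons is the constant minimum-variance hedge ratio h* = Cov(Δs, Δf) / Var(Δf), which the OLS-based model delivers; the GARCH specifications replace these unconditional moments with time-varying conditional ones. A sketch on synthetic spot and futures changes (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 1000
df = rng.normal(scale=1.0, size=n)             # futures price changes
ds = 0.9 * df + rng.normal(scale=0.4, size=n)  # spot changes, imperfectly correlated

# Constant minimum-variance hedge ratio (equivalently, the OLS slope of ds on df).
h_star = np.cov(ds, df)[0, 1] / np.var(df, ddof=1)
hedged = ds - h_star * df
print("hedge ratio:", h_star,
      "variance reduction:", 1 - hedged.var() / ds.var())
```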

Journal ArticleDOI
TL;DR: In this paper, a two-order spatio-temporal autoregressive model was developed to deal with both the spatiotemporal autocorrelations and the heteroscedasticity problem arising from the nature of multi-unit residential real estate data.
Abstract: By splitting the spatial effects into building and neighborhood effects, this paper develops a two-order spatio-temporal autoregressive model to deal with both the spatio-temporal autocorrelations and the heteroscedasticity problem arising from the nature of multi-unit residential real estate data. The empirical results based on 54,282 condominium transactions in Singapore between 1990 and 1999 show that in the multi-unit residential market, a two-order spatio-temporal autoregressive model incorporates more spatial information into the model, thus outperforming the models originally developed in the market for single-family homes. This implies that the specification of a spatio-temporal model should consider the physical market structure as it affects the spatial process. It is found that the Bayesian estimation method can produce more robust coefficients by efficiently detecting and correcting heteroscedasticity, indicating that it is more suitable for estimating a real estate hedonic model than conventional OLS estimation. It is also found that there is a trade-off between heteroscedastic robustness and the incorporation of spatial information into the model estimation. The model is then used to construct building-specific price indices. The results show that the price indices for different condominiums and the buildings within a condominium do behave differently, especially when compared with the aggregate market indices.

Journal ArticleDOI
TL;DR: In this article, the authors address estimation of the parameters in linear autoregressive models in the presence of additive and uncorrelated measurement errors, allowing heteroscedasticity in the measurement error variances.
Abstract: Time series data are often subject to measurement error, usually the result of needing to estimate the variable of interest. Although it is often reasonable to assume that the measurement error is additive (i.e., the estimator is conditionally unbiased for the missing true value), the measurement error variances often vary as a result of changes in the population/process over time and/or changes in sampling effort. In this article we address estimation of the parameters in linear autoregressive models in the presence of additive and uncorrelated measurement errors, allowing heteroscedasticity in the measurement error variances. We establish the asymptotic properties of naive estimators that ignore measurement error and propose an estimator based on correcting the Yule–Walker estimating equations. We also examine a pseudo-likelihood method based on normality assumptions and computed using the Kalman filter. We review other techniques that have been proposed, including two that require no information about ...
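
A sketch of the attenuation problem and a Yule-Walker-type correction for an AR(1) observed with additive measurement error: the error inflates the lag-0 autocovariance but not the lag-1 autocovariance, so dividing by the error-corrected variance restores the coefficient. The measurement error variance is taken as known and homoscedastic here for simplicity; the paper allows heteroscedastic variances.

```python
import numpy as np

rng = np.random.default_rng(12)
T, phi, tau2 = 5000, 0.8, 0.5       # AR coefficient and measurement error variance
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal()
y = x + rng.normal(scale=np.sqrt(tau2), size=T)   # observed with error

yc = y - y.mean()
gamma0 = np.mean(yc * yc)
gamma1 = np.mean(yc[1:] * yc[:-1])

phi_naive = gamma1 / gamma0               # attenuated toward zero
phi_corrected = gamma1 / (gamma0 - tau2)  # subtract measurement error variance
print(phi_naive, phi_corrected)
```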

Journal ArticleDOI
TL;DR: In this paper, the authors present two applications of extreme value theory (EVT) to financial markets: computation of value at risk (VaR) and cross-section dependence of extreme returns (i.e., tail dependence).

Posted Content
TL;DR: In this article, a number of univariate and multivariate ARCH models, their estimating methods and the characteristics of financial time series, which are captured by volatility models, are presented, and a systematic presentation of the models that have been considered in the ARCH literature can be useful in guiding one's choice of a model for exploiting future volatility.
Abstract: Autoregressive Conditional Heteroscedasticity (ARCH) models have successfully been employed in order to predict asset return volatility. Predicting volatility is of great importance in pricing financial derivatives, selecting portfolios, measuring and managing investment risk more accurately. In this paper, a number of univariate and multivariate ARCH models, their estimating methods and the characteristics of financial time series, which are captured by volatility models, are presented. The number of possible conditional volatility formulations is vast. Therefore, a systematic presentation of the models that have been considered in the ARCH literature can be useful in guiding one's choice of a model for exploiting future volatility, with applications in financial markets.