
Journal ArticleDOI

The econometrics of ultra-high-frequency data

01 Jan 2000-Econometrica (Blackwell Publishers Ltd)-Vol. 68, Iss: 1, pp 1-22

Abstract: Ultra-high-frequency data is defined to be a full record of transactions and their associated characteristics. The transaction arrival times and accompanying measures can be analyzed as marked point processes. The ACD point process developed by Engle and Russell (1998) is applied to IBM transactions arrival times to develop semiparametric hazard estimates and conditional intensities. Combining these intensities with a GARCH model of prices produces ultra-high-frequency measures of volatility. Both returns and variances are found to be negatively influenced by long durations as suggested by asymmetric information models of market micro-structure.
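The ACD model referenced above treats durations between trades as x_i = ψ_i ε_i with ψ_i = ω + α x_{i-1} + β ψ_{i-1} and unit-mean exponential ε_i. Below is a minimal Python sketch of an exponential ACD(1,1) process and its quasi log-likelihood; the parameter values and function names are illustrative, not code from the paper.

```python
import numpy as np

def simulate_acd(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
    """Simulate an exponential ACD(1,1): x_i = psi_i * eps_i, eps_i ~ Exp(1)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    psi = omega / (1.0 - alpha - beta)   # start at the unconditional mean duration
    for i in range(n):
        x[i] = psi * rng.exponential(1.0)
        psi = omega + alpha * x[i] + beta * psi   # ACD(1,1) recursion
    return x

def acd_loglik(params, x):
    """Exponential-ACD log-likelihood (up to a constant), for QML estimation."""
    omega, alpha, beta = params
    psi = x.mean()                        # simple initialization of psi_1
    ll = 0.0
    for xi in x:
        ll += -np.log(psi) - xi / psi     # Exp(psi) density contribution
        psi = omega + alpha * xi + beta * psi
    return ll

durations = simulate_acd(5000)
```

In practice one would maximize `acd_loglik` numerically over (ω, α, β); the exponential assumption can be relaxed to Weibull or semiparametric hazards as in the paper.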
Citations

Journal ArticleDOI
Abstract: A rapidly growing literature has documented important improvements in financial return volatility measurement and forecasting via use of realized variation measures constructed from high-frequency returns coupled with simple modeling procedures. Building on recent theoretical results in Barndorff-Nielsen and Shephard (2004a, 2005) for related bi-power variation measures, the present paper provides a practical and robust framework for non-parametrically measuring the jump component in asset return volatility. In an application to the DM/$ exchange rate, the S&P500 market index, and the 30-year U.S. Treasury bond yield, we find that jumps are both highly prevalent and distinctly less persistent than the continuous sample path variation process. Moreover, many jumps appear directly associated with specific macroeconomic news announcements. Separating jump from non-jump movements in a simple but sophisticated volatility forecasting model, we find that almost all of the predictability in daily, weekly, and monthly return volatilities comes from the non-jump component. Our results thus set the stage for a number of interesting future econometric developments and important financial applications by separately modeling, forecasting, and pricing the continuous and jump components of the total return variation process.

1,019 citations
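The bipower-variation idea behind this paper can be sketched simply: realized variance picks up both the continuous variation and jumps, while bipower variation is robust to jumps, so their (truncated) difference estimates the jump component. The snippet below is an illustrative simplification (equally spaced returns, no small-sample or finite-activity corrections); names and values are hypothetical.

```python
import numpy as np

def realized_variance(r):
    """Sum of squared intraday returns: converges to IV + jump variation."""
    return np.sum(r ** 2)

def bipower_variation(r):
    """Barndorff-Nielsen/Shephard bipower variation, robust to jumps.
    Scaling mu1^{-2} = pi/2, where mu1 = E|Z| = sqrt(2/pi) for Z ~ N(0,1)."""
    return (np.pi / 2.0) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))

def jump_component(r):
    """Non-negative estimate of the jump contribution to daily variance."""
    return max(realized_variance(r) - bipower_variation(r), 0.0)

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.001, size=390)        # purely continuous intraday returns
r_jump = r.copy()
r_jump[200] += 0.02                          # inject a single price jump
```

With the jump injected, the RV-minus-BV gap widens while BV stays close to the continuous variation, which is exactly the separation the forecasting exercise in the paper exploits.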


Journal ArticleDOI
Eric Ghysels, Arthur Sinko, Rossen Valkanov
Abstract: We explore mixed data sampling (henceforth MIDAS) regression models. The regressions involve time series data sampled at different frequencies. Volatility and related processes are our prime focus, though the regression method has wider applications in macroeconomics and finance, among other areas. The regressions combine recent developments regarding estimation of volatility and a not-so-recent literature on distributed lag models. We study various lag structures to parameterize the regressions parsimoniously and relate them to existing models. We also propose several new extensions of the MIDAS framework. The paper concludes with an empirical section where we provide further evidence and new results on the risk–return trade-off. We also report empirical evidence on microstructure noise and volatility forecasting.

689 citations
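A standard way to parameterize the MIDAS lag structure parsimoniously is the exponential Almon polynomial, in which two parameters generate the entire weight profile over many high-frequency lags. The sketch below is my own illustration (function names and θ values are hypothetical, not from the paper).

```python
import numpy as np

def exp_almon_weights(n_lags, theta1=0.01, theta2=-0.005):
    """Exponential Almon lag weights: w_k proportional to exp(theta1*k + theta2*k^2),
    normalized to sum to one. theta2 < 0 makes distant lags decay smoothly."""
    k = np.arange(1, n_lags + 1, dtype=float)
    raw = np.exp(theta1 * k + theta2 * k ** 2)
    return raw / raw.sum()

def midas_aggregate(x_high, weights):
    """Collapse the most recent high-frequency lags into one low-frequency
    regressor; weights[0] multiplies the most recent observation."""
    return np.dot(weights, x_high[-len(weights):][::-1])

w = exp_almon_weights(22)   # e.g. roughly one month of daily lags
```

In a MIDAS regression the two θ parameters are estimated jointly with the slope, so 22 daily lags cost only a handful of free parameters.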


Journal ArticleDOI
Robert F. Engle
Abstract: In the 20 years following the publication of the ARCH model, there has been a vast quantity of research uncovering the properties of competing volatility models. Wide-ranging applications to financial data have discovered important stylized facts and illustrated both the strengths and weaknesses of the models. There are now many surveys of this literature. This paper looks forward to identify promising areas of new research. The paper lists five new frontiers. It briefly discusses three—high-frequency volatility models, large-scale multivariate ARCH models, and derivatives pricing models. Two further frontiers are examined in more detail—application of ARCH models to the broad class of non-negative processes, and use of Least Squares Monte Carlo to examine non-linear properties of any model that can be simulated. Using this methodology, the paper analyses more general types of ARCH models, stochastic volatility models, long-memory models and breaking volatility models. The volatility of volatility is defined, estimated and compared with option-implied volatilities. Copyright © 2002 John Wiley & Sons, Ltd.

648 citations


Journal ArticleDOI
Abstract: The advantage of knowing about risks is that we can change our behavior to avoid them. Of course, it is easily observed that to avoid all risks would be impossible; it might entail no flying, no driving, no walking, eating and drinking only healthy foods and never being touched by sunshine. Even a bath could be dangerous. I could not receive this prize if I sought to avoid all risks. There are some risks we choose to take because the benefits from taking them exceed the possible costs. Optimal behavior takes risks that are worthwhile. This is the central paradigm of finance; we must take risks to achieve rewards but not all risks are equally rewarded. Both the risks and the rewards are in the future, so it is the expectation of loss that is balanced against the expectation of reward. Thus we optimize our behavior, and in particular our portfolio, to maximize rewards and minimize risks.

496 citations


Journal ArticleDOI
Jean Jacod, Yingying Li, Per A. Mykland, Mark Podolskij, et al.
Abstract: This paper presents a generalized pre-averaging approach for estimating the integrated volatility. This approach also provides consistent estimators of other powers of volatility; in particular, it gives feasible ways to consistently estimate the asymptotic variance of the estimator of the integrated volatility. We show that our approach, which possesses an intuitive transparency, can generate rate-optimal estimators (with convergence rate n^(1/4)).

462 citations
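The pre-averaging idea can be illustrated concretely: average the noisy observed prices locally over windows of length k_n ≈ θ√n before squaring, then correct for the residual noise bias, yielding the optimal n^(1/4) convergence rate. Below is a simplified, hypothetical sketch using the weight g(x) = min(x, 1−x); constants and names are my own, not the paper's exact estimator.

```python
import numpy as np

def preaveraged_rv(y, theta=1.0):
    """Pre-averaged realized variance with a noise-bias correction (sketch)."""
    n = len(y) - 1
    kn = int(np.ceil(theta * np.sqrt(n)))        # window length k_n ~ theta*sqrt(n)
    j = np.arange(1, kn) / kn
    g = np.minimum(j, 1.0 - j)                   # weight function g(x) = min(x, 1-x)
    psi1 = kn * np.sum(np.diff(np.concatenate(([0.0], g, [0.0]))) ** 2)
    psi2 = np.sum(g ** 2) / kn
    dy = np.diff(y)
    # locally weighted (pre-averaged) increments damp i.i.d. microstructure noise
    ybar = np.array([g @ dy[i:i + kn - 1] for i in range(n - kn + 2)])
    noise_var = np.sum(dy ** 2) / (2 * n)        # rough estimate of noise variance
    est = np.sum(ybar ** 2) / (kn * psi2) - psi1 * noise_var / (theta ** 2 * psi2)
    return max(est, 0.0)

# Simulated efficient price plus i.i.d. noise; integrated variance is 1.0
rng = np.random.default_rng(3)
n = 23400
x = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), size=n))
y = np.concatenate(([0.0], x)) + rng.normal(0.0, 0.01, size=n + 1)
```

Plain realized variance on `y` is dominated by the noise term 2nω², whereas the pre-averaged estimate stays near the true integrated variance.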


References



Journal ArticleDOI
Abstract: The presence of traders with superior information leads to a positive bid-ask spread even when the specialist is risk-neutral and makes zero expected profits. The resulting transaction prices convey information, and the expectation of the average spread squared times volume is bounded by a number that is independent of insider activity. The serial correlation of transaction price differences is a function of the proportion of the spread due to adverse selection. A bid-ask spread implies a divergence between observed returns and realizable returns. Observed returns are approximately realizable returns plus what the uninformed anticipate losing to the insiders.

5,759 citations


Journal ArticleDOI
Abstract (table of contents):
Preface.
1. Introduction. 1.1 Failure Time Data. 1.2 Failure Time Distributions. 1.3 Time Origins, Censoring, and Truncation. 1.4 Estimation of the Survivor Function. 1.5 Comparison of Survival Curves. 1.6 Generalizations to Accommodate Delayed Entry. 1.7 Counting Process Notation. Bibliographic Notes. Exercises and Complements.
2. Failure Time Models. 2.1 Introduction. 2.2 Some Continuous Parametric Failure Time Models. 2.3 Regression Models. 2.4 Discrete Failure Time Models. Bibliographic Notes. Exercises and Complements.
3. Inference in Parametric Models and Related Topics. 3.1 Introduction. 3.2 Censoring Mechanisms. 3.3 Censored Samples from an Exponential Distribution. 3.4 Large-Sample Likelihood Theory. 3.5 Exponential Regression. 3.6 Estimation in Log-Linear Regression Models. 3.7 Illustrations in More Complex Data Sets. 3.8 Discrimination Among Parametric Models. 3.9 Inference with Interval Censoring. 3.10 Discussion. Bibliographic Notes. Exercises and Complements.
4. Relative Risk (Cox) Regression Models. 4.1 Introduction. 4.2 Estimation of beta. 4.3 Estimation of the Baseline Hazard or Survivor Function. 4.4 Inclusion of Strata. 4.5 Illustrations. 4.6 Counting Process Formulas. 4.7 Related Topics on the Cox Model. 4.8 Sampling from Discrete Models. Bibliographic Notes. Exercises and Complements.
5. Counting Processes and Asymptotic Theory. 5.1 Introduction. 5.2 Counting Processes and Intensity Functions. 5.3 Martingales. 5.4 Vector-Valued Martingales. 5.5 Martingale Central Limit Theorem. 5.6 Asymptotics Associated with Chapter 1. 5.7 Asymptotic Results for the Cox Model. 5.8 Asymptotic Results for Parametric Models. 5.9 Efficiency of the Cox Model Estimator. 5.10 Partial Likelihood Filtration. Bibliographic Notes. Exercises and Complements.
6. Likelihood Construction and Further Results. 6.1 Introduction. 6.2 Likelihood Construction in Parametric Models. 6.3 Time-Dependent Covariates and Further Remarks on Likelihood Construction. 6.4 Time Dependence in the Relative Risk Model. 6.5 Nonnested Conditioning Events. 6.6 Residuals and Model Checking for the Cox Model. Bibliographic Notes. Exercises and Complements.
7. Rank Regression and the Accelerated Failure Time Model. 7.1 Introduction. 7.2 Linear Rank Tests. 7.3 Development and Properties of Linear Rank Tests. 7.4 Estimation in the Accelerated Failure Time Model. 7.5 Some Related Regression Models. Bibliographic Notes. Exercises and Complements.
8. Competing Risks and Multistate Models. 8.1 Introduction. 8.2 Competing Risks. 8.3 Life-History Processes. Bibliographic Notes. Exercises and Complements.
9. Modeling and Analysis of Recurrent Event Data. 9.1 Introduction. 9.2 Intensity Processes for Recurrent Events. 9.3 Overall Intensity Process Modeling and Estimation. 9.4 Mean Process Modeling and Estimation. 9.5 Conditioning on Aspects of the Counting Process History. Bibliographic Notes. Exercises and Complements.
10. Analysis of Correlated Failure Time Data. 10.1 Introduction. 10.2 Regression Models for Correlated Failure Time Data. 10.3 Representation and Estimation of the Bivariate Survivor Function. 10.4 Pairwise Dependency Estimation. 10.5 Illustration: Australian Twin Data. 10.6 Approaches to Nonparametric Estimation of the Bivariate Survivor Function. 10.7 Survivor Function Estimation in Higher Dimensions. Bibliographic Notes. Exercises and Complements.
11. Additional Failure Time Data Topics. 11.1 Introduction. 11.2 Stratified Bivariate Failure Time Analysis. 11.3 Fixed Study Period Survival Studies. 11.4 Cohort Sampling and Case-Control Studies. 11.5 Missing Covariate Data. 11.6 Mismeasured Covariate Data. 11.7 Sequential Testing with Failure Time Endpoints. 11.8 Bayesian Analysis of the Proportional Hazards Model. 11.9 Some Analyses of a Particular Data Set. Bibliographic Notes. Exercises and Complements.
Glossary of Notation. Appendix A: Some Sets of Data. Appendix B: Supporting Technical Material. Bibliography. Author Index. Subject Index.

3,596 citations


Journal ArticleDOI
Abstract: We study the properties of the quasi-maximum likelihood estimator (QMLE) and related test statistics in dynamic models that jointly parameterize conditional means and conditional covariances, when a normal log-likelihood is maximized but the assumption of normality is violated. Because the score of the normal log-likelihood has the martingale difference property when the first two conditional moments are correctly specified, the QMLE is generally consistent and has a limiting normal distribution. We provide easily computable formulas for asymptotic standard errors that are valid under nonnormality. Further, we show how robust LM tests for the adequacy of the jointly parameterized mean and variance can be computed from simple auxiliary regressions. An appealing feature of these robust inference procedures is that only first derivatives of the conditional mean and variance functions are needed. A Monte Carlo study indicates that the asymptotic results carry over to finite samples. Estimation of several AR a...

3,456 citations
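The robust standard errors described above take the sandwich form A⁻¹BA⁻¹, with A the expected Hessian of the normal log-likelihood and B the outer product of scores. As a toy sketch (my own simplified i.i.d. example, not the paper's dynamic-model setting), the snippet below computes QMLE sandwich standard errors for the mean and variance of deliberately non-normal data fitted with a normal log-likelihood.

```python
import numpy as np

def normal_score(params, x):
    """Per-observation scores of the normal log-likelihood in (mu, sigma^2)."""
    mu, sigma2 = params
    s_mu = (x - mu) / sigma2
    s_s2 = ((x - mu) ** 2 - sigma2) / (2 * sigma2 ** 2)
    return np.column_stack([s_mu, s_s2])

def qmle_sandwich_se(x):
    """Robust (sandwich) standard errors: A^{-1} B A^{-1} / n."""
    n = len(x)
    mu_hat, s2_hat = x.mean(), x.var()                  # normal QMLE estimates
    s = normal_score((mu_hat, s2_hat), x)
    B = s.T @ s / n                                     # outer product of scores
    A = np.diag([1 / s2_hat, 1 / (2 * s2_hat ** 2)])    # expected Hessian (information)
    A_inv = np.linalg.inv(A)
    V = A_inv @ B @ A_inv / n
    return np.sqrt(np.diag(V))

rng = np.random.default_rng(2)
x = rng.exponential(1.0, size=20000)    # non-normal data: normal SEs are wrong
se_robust = qmle_sandwich_se(x)
```

Under normality A = B and the sandwich collapses to the usual inverse information; with skewed data such as the exponential sample above, the robust standard error for the variance parameter is markedly larger than the naive one.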


Journal ArticleDOI
Anat R. Admati, Paul Pfleiderer
Abstract: This article develops a theory in which concentrated-trading patterns arise endogenously as a result of the strategic behavior of liquidity traders and informed traders. Our results provide a partial explanation for some of the recent empirical findings concerning the patterns of volume and price variability in intraday transaction data. In the last few years, intraday trading data for a number of securities have become available. Several empirical studies have used these data to identify various patterns in trading volume and in the daily behavior of security prices. This article focuses on two of these patterns: trading volume and the variability of returns. Consider, for example, the data in Table 1 concerning shares of Exxon traded during 1981. The U-shaped pattern of the average volume of shares traded, namely the heavy trading at the beginning and the end of the trading day and the relatively light trading in the middle of the day, is very typical and has been documented in a number of studies. [For example, Jain and Joh (1986) examine hourly data for the aggregate volume on the NYSE, which is reported in the Wall Street Journal, and find the same pattern.] Both the variance of price changes

3,194 citations


Performance Metrics

No. of citations received by the Paper in previous years:

Year  Citations
2021  17
2020  17
2019  19
2018  9
2017  20
2016  30