Author

Per A. Mykland

Other affiliations: Humboldt University of Berlin
Bio: Per A. Mykland is an academic researcher from the University of Chicago. The author has contributed to research in the topics of Estimator and Volatility (finance). The author has an h-index of 41 and has co-authored 102 publications receiving 8,742 citations. Previous affiliations of Per A. Mykland include Humboldt University of Berlin.


Papers
ReportDOI
TL;DR: Under this framework, it becomes clear why and where the “usual” volatility estimator fails when the returns are sampled at the highest frequencies, and the analysis provides a way of finding the optimal sampling frequency for any size of the noise.
Abstract: It is a common financial practice to estimate volatility from the sum of frequently-sampled squared returns. However, market microstructure poses challenges to this estimation approach, as evidenced by recent empirical studies in finance. This work attempts to lay out theoretical grounds that reconcile continuous-time modeling and discrete-time samples. We propose an estimation approach that takes advantage of the rich sources in tick-by-tick data while preserving the continuous-time assumption on the underlying returns. Under our framework, it becomes clear why and where the “usual” volatility estimator fails when the returns are sampled at the highest frequency.
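As a rough illustration of the failure mode described in the abstract, the following toy simulation (not from the paper; the volatility and noise levels are arbitrary) shows the sum of squared returns drifting away from the true integrated variance as the sampling grid gets finer, because i.i.d. microstructure noise adds a bias that grows with the number of observations.

```python
# Toy illustration (not from the paper): with i.i.d. microstructure noise,
# the sum of squared returns picks up a bias of roughly 2*n*E[noise^2],
# so it moves away from the true integrated variance as n grows.
import numpy as np

rng = np.random.default_rng(0)
T, sigma, noise_sd = 1.0, 0.2, 0.001   # arbitrary illustrative values

for n in (78, 390, 23_400):            # ~5-min, 1-min and 1-sec grids on one trading day
    dt = T / n
    efficient = np.concatenate([[0.0], np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n))])
    observed = efficient + noise_sd * rng.standard_normal(n + 1)
    rv = np.sum(np.diff(observed) ** 2)
    print(f"n={n:>6}  realized variance={rv:.4f}   true integrated variance={sigma**2 * T:.4f}")
```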

1,724 citations

Journal ArticleDOI
TL;DR: In this article, the authors show that the optimal sampling frequency is finite and derive its closed-form expression, and demonstrate that modelling the noise and using all the data is a better solution, even if one misspecifies the noise distribution.
Abstract: In theory, the sum of squares of log returns sampled at high frequency estimates their variance. When market microstructure noise is present but unaccounted for, however, we show that the optimal sampling frequency is finite and derive its closed-form expression. But even with optimal sampling, using, say, five-minute returns when transactions are recorded every second, a vast amount of data is discarded, in contradiction to basic statistical principles. We demonstrate that modelling the noise and using all the data is a better solution, even if one misspecifies the noise distribution. So the answer is: sample as often as possible.
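The trade-off behind a finite optimal frequency can be sketched at leading order (the paper derives the exact closed-form expression; the constants below come from a simple bias–variance calculation and are only illustrative): the squared noise-induced bias of realized variance grows with the number of observations, while the discretization error shrinks, so minimizing their sum gives a finite optimal sample size.

```python
# Back-of-the-envelope sketch (leading order only; not the paper's exact formula):
# under i.i.d. noise with variance a2, realized variance has roughly
#   squared bias      ~ (2 * n * a2)**2           (grows with n)
#   sampling variance ~ (2 * T / n) * quarticity  (shrinks with n)
# so the optimal number of observations balances the two.
def optimal_n(T, quarticity, a2):
    """T: horizon; quarticity: integral of sigma^4 over [0, T]; a2: noise variance."""
    return (T * quarticity / (4 * a2 ** 2)) ** (1 / 3)

# hypothetical numbers: one unit of time, sigma = 0.2, noise standard deviation = 0.001
print(optimal_n(T=1.0, quarticity=0.2 ** 4, a2=0.001 ** 2))
```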

820 citations

Journal ArticleDOI
TL;DR: In this paper, the authors introduce a nonparametric test to detect jump arrival times and realized jump sizes in asset prices up to the intra-day level, and demonstrate that the likelihood of misclassification of jumps becomes negligible when using high-frequency returns.
Abstract: This paper introduces a new nonparametric test to detect jump arrival times and realized jump sizes in asset prices up to the intra-day level. We demonstrate that the likelihood of misclassification of jumps becomes negligible when we use high-frequency returns. Using our test, we examine jump dynamics and their distributions in the U.S. equity markets. The results show that individual stock jumps are associated with prescheduled earnings announcements and other company-specific news events. Additionally, S&P 500 Index jumps are associated with general market news announcements. This suggests different pricing models for individual equity options versus index options.
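A simplified sketch in the spirit of such a test (the window length and the fixed threshold below are placeholders; the paper derives its rejection threshold from extreme-value theory, which is what makes the misclassification probability negligible): each return is standardized by a local bipower-variation estimate of spot volatility, and unusually large standardized returns are flagged as jumps.

```python
import numpy as np

def flag_jumps(log_prices, window=100, threshold=4.0):
    """Illustrative jump flags: standardize each return by a local
    bipower-variation volatility estimate over the preceding `window`
    returns, then flag |statistic| > threshold. Window and threshold
    are arbitrary; the paper uses a Gumbel-based cutoff instead."""
    r = np.diff(np.asarray(log_prices, dtype=float))
    flags = np.zeros(len(r), dtype=bool)
    c = np.pi / 2                                   # bipower scaling constant
    for i in range(window, len(r)):
        past = r[i - window:i]
        bipower = c * np.mean(np.abs(past[1:]) * np.abs(past[:-1]))
        local_sd = np.sqrt(bipower)                 # local spot-volatility estimate
        if local_sd > 0 and abs(r[i]) / local_sd > threshold:
            flags[i] = True
    return flags
```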

810 citations

Journal ArticleDOI
TL;DR: In this article, the authors propose an estimation approach that takes advantage of the rich sources in tick-by-tick data while preserving the continuous-time assumption on the underlying returns.
Abstract: It is a common practice in finance to estimate volatility from the sum of frequently sampled squared returns. However, market microstructure poses challenges to this estimation approach, as evidenced by recent empirical studies in finance. The present work attempts to lay out theoretical grounds that reconcile continuous-time modeling and discrete-time samples. We propose an estimation approach that takes advantage of the rich sources in tick-by-tick data while preserving the continuous-time assumption on the underlying returns. Under our framework, it becomes clear why and where the “usual” volatility estimator fails when the returns are sampled at the highest frequencies. If the noise is asymptotically small, our work provides a way of finding the optimal sampling frequency. A better approach, the “two-scales estimator,” works for any size of the noise.
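A minimal sketch of the two-scales idea described above (the slow-scale grid count K is a tuning parameter chosen arbitrarily here): average the realized variance over K offset sparse grids, then subtract a bias correction built from the noisy full-grid realized variance.

```python
import numpy as np

def two_scales_rv(log_prices, K=30):
    """Sketch of a two-scales realized variance; K is an illustrative choice."""
    p = np.asarray(log_prices, dtype=float)
    n = len(p) - 1                       # number of returns on the full grid
    rv_all = np.sum(np.diff(p) ** 2)     # noisy "use everything" estimator
    # average realized variance over K offset sparse grids
    rv_sparse = np.mean([np.sum(np.diff(p[k::K]) ** 2) for k in range(K)])
    n_bar = (n - K + 1) / K              # average number of sparse-grid returns
    return rv_sparse - (n_bar / n) * rv_all
```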

726 citations

Journal ArticleDOI
TL;DR: In this article, a generalized pre-averaging approach for estimating the integrated volatility is presented, which can generate rate-optimal estimators with convergence rate n^{-1/4}, although the rate attained depends on the choice of the pre-averaging scheme.
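A schematic sketch of the pre-averaging idea (the weight function, window length, and normalization below follow a standard construction and are meant as an illustration, not as the paper's exact estimator): returns are locally averaged over windows of length proportional to the square root of the sample size, which damps the noise before squaring and is what produces the n^{-1/4} rate; a correction term removes the residual noise bias.

```python
import numpy as np

def preaveraged_iv(log_prices, theta=1.0):
    """Illustrative pre-averaging estimator of integrated variance.
    theta sets the window k ~ theta * sqrt(n); its value here is arbitrary."""
    r = np.diff(np.asarray(log_prices, dtype=float))       # noisy returns
    n = len(r)
    k = max(2, int(np.ceil(theta * np.sqrt(n))))            # pre-averaging window
    j = np.arange(1, k)
    g = np.minimum(j / k, 1.0 - j / k)                      # weight g(x) = min(x, 1 - x)
    psi1 = k * np.sum(np.diff(np.r_[0.0, g, 0.0]) ** 2)     # ~ integral of g'(x)^2
    psi2 = np.sum(g ** 2) / k                               # ~ integral of g(x)^2
    bar = np.convolve(r, g[::-1], mode="valid")             # locally averaged returns
    main = np.sum(bar ** 2) / (k * psi2)                    # noise-damped sum of squares
    bias = psi1 * np.sum(r ** 2) / (2 * psi2 * k ** 2)      # residual-noise correction
    return main - bias
```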

525 citations


Cited by
Book
01 Jan 2009

8,216 citations

Book
01 Jan 1993
TL;DR: This second edition reflects the same discipline and style that marked out the original and helped it to become a classic: proofs are rigorous and concise, the range of applications is broad and knowledgeable, and key ideas are accessible to practitioners with limited mathematical background.
Abstract: Meyn & Tweedie is back! The bible on Markov chains in general state spaces has been brought up to date to reflect developments in the field since 1996 - many of them sparked by publication of the first edition. The pursuit of more efficient simulation algorithms for complex Markovian models, or algorithms for computation of optimal policies for controlled Markov models, has opened new directions for research on Markov chains. As a result, new applications have emerged across a wide range of topics including optimisation, statistics, and economics. New commentary and an epilogue by Sean Meyn summarise recent developments and references have been fully updated. This second edition reflects the same discipline and style that marked out the original and helped it to become a classic: proofs are rigorous and concise, the range of applications is broad and knowledgeable, and key ideas are accessible to practitioners with limited mathematical background.

5,931 citations

Journal ArticleDOI
TL;DR: A bibliographic notice of P. Billingsley's Convergence of Probability Measures (Wiley, 1968), a standard reference on weak convergence of probability measures.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4“. 117s.

5,689 citations

Book ChapterDOI
01 Jan 2011
TL;DR: Weak convergence methods in metric spaces were studied in this article, with applications sufficient to show their power and utility, and the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable." Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an

3,554 citations

Journal ArticleDOI
TL;DR: This introductory paper introduces the Monte Carlo method with emphasis on probabilistic machine learning and reviews the main building blocks of modern Markov chain Monte Carlo simulation.

Abstract: The purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing an introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.
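One of the building blocks such reviews cover is the Metropolis–Hastings algorithm; a minimal random-walk Metropolis sampler for a one-dimensional target (the target density, step size, and sample count below are arbitrary illustrations, not the paper's own code) looks like this:

```python
import numpy as np

def random_walk_metropolis(log_target, x0=0.0, n_samples=10_000, step=1.0, seed=0):
    """Minimal random-walk Metropolis sampler (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal()
        # accept with probability min(1, target(proposal) / target(x))
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

# example: sample from a standard normal target
draws = random_walk_metropolis(lambda x: -0.5 * x ** 2)
print(draws.mean(), draws.std())
```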

2,579 citations