How Often to Sample a Continuous-Time Process in the Presence of Market Microstructure Noise
Frequently Asked Questions (12)
Q2. What is the likelihood function of the market microstructure noise?
Suppose that, instead of being iid with mean 0 and variance a², the market microstructure noise follows dU_t = −b U_t dt + c dZ_t, where b > 0, c > 0, and Z is a Brownian motion independent of W.
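As a concrete illustration of this alternative noise specification, the Ornstein–Uhlenbeck dynamics above can be simulated exactly on a discrete grid. The Python sketch below uses the exact AR(1) discretization; the parameter values b, c, dt and n are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative simulation of the alternative noise specification
# dU_t = -b U_t dt + c dZ_t, via its exact discretization on a grid
# with spacing dt.  The values of b, c, dt and n are arbitrary choices.
rng = np.random.default_rng(0)
b, c, dt, n = 2.0, 0.5, 1.0 / 252, 100_000

phi = np.exp(-b * dt)                       # AR(1) coefficient of the exact scheme
sd = c * np.sqrt((1 - phi**2) / (2 * b))    # conditional standard deviation

u = np.empty(n)
u[0] = rng.normal(0.0, c / np.sqrt(2 * b))  # draw U_0 from the stationary law
for i in range(1, n):
    u[i] = phi * u[i - 1] + sd * rng.normal()

# The stationary variance of U is c^2 / (2b); the sample variance should be close.
print(np.var(u), c**2 / (2 * b))
```

The exact scheme is preferable to an Euler step here because it is unbiased at any grid spacing.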
Q3. What makes their situation unusual relative to quasi-likelihood?
What makes their situation unusual relative to quasi-likelihood is that the interest parameter σ² and the nuisance parameter a² are entangled in the same estimating equations (the scores l̇_{σ²} and l̇_{a²} from the Gaussian likelihood) in such a way that the estimate of σ² depends, to first order, on whether a² is known or not.
Q4. What is the likelihood function for the Y's?
The likelihood function for the Y's is then given by

l(η, γ²) = −ln det(V)/2 − N ln(2πγ²)/2 − (2γ²)^(−1) Y′V^(−1)Y,   (4.1)

where the covariance matrix of the vector Y = (Y_1, ..., Y_N)′ is γ²V, with V = [v_ij]_{i,j=1,...,N} the tridiagonal matrix

⎡ 1+η²   η      0    ···    0   ⎤
⎢  η    1+η²    η     ⋱     ⋮   ⎥
⎢  0     η    1+η²    ⋱     0   ⎥   (4.2)
⎢  ⋮     ⋱      ⋱     ⋱     η   ⎥
⎣  0    ···    0      η   1+η²  ⎦

that is, with 1+η² on the diagonal and η on the first off-diagonals. Further,

det(V) = (1 − η^(2N+2)) / (1 − η²),   (4.3)

and, neglecting the end effects, an approximate inverse of V is the matrix Ω = [ω_ij]_{i,j=1,...,N} where ω_ij = (1 − η²)^(−1) (−η)^|i−j| (see Durbin (1959)).
Q5. What is the likelihood function of the X process?
The increments Y_i = σ(W_{τ_{i+1}} − W_{τ_i}) are then iid N(0, σ²Δ), so the likelihood function is

l(σ²) = −N ln(2πσ²Δ)/2 − (2σ²Δ)^(−1) Y′Y,   (2.1)

where Y = (Y_1, ..., Y_N)′.
Q6. What is the way to solve the problem of the noise term?
The authors then addressed the issue of what to do about the noise, and showed that modelling the noise term explicitly restores the first-order conclusion that sampling as often as possible is optimal.
Q7. What is the way to sample the noise term?
The first finding of the paper is that there are situations where the presence of market microstructure noise makes it optimal to sample less often than would be the case in the absence of noise.
Q8. What is the maximum-likelihood estimator of σ²?
The maximum-likelihood estimator of σ² coincides with the discrete approximation to the quadratic variation of the process,

σ̂² = (1/T) Σ_{i=1}^{N} Y_i²,   (2.2)

which has the following exact small-sample moments:

E[σ̂²] = (1/T) Σ_{i=1}^{N} E[Y_i²] = N(σ²Δ)/T = σ²,

Var[σ̂²] = (1/T²) Var[Σ_{i=1}^{N} Y_i²] = (1/T²) Σ_{i=1}^{N} Var[Y_i²] = (N/T²)(2σ⁴Δ²) = 2σ⁴Δ/T,

and the following asymptotic distribution:

T^(1/2) (σ̂² − σ²) → N(0, ω) as T → ∞,   (2.3)

where

ω = AVAR(σ̂²) = Δ E[−l̈(σ²)]^(−1) = 2σ⁴Δ.   (2.4)

Thus selecting Δ as small as possible is optimal for the purpose of estimating σ².
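These small-sample moments are straightforward to verify by simulation. The Python sketch below draws iid returns Y_i ~ N(0, σ²Δ) and checks that σ̂² from (2.2) has mean σ² and variance 2σ⁴Δ/T; the values of σ², Δ, T and the number of replications are illustrative choices, not from the paper.

```python
import numpy as np

# Monte Carlo check of (2.2): without noise, Y_i ~ iid N(0, sigma^2 * Delta),
# so sigma_hat^2 = (1/T) * sum Y_i^2 is unbiased for sigma^2 with variance
# 2 * sigma^4 * Delta / T.  Parameter values are illustrative
# (roughly one day of one-minute returns).
rng = np.random.default_rng(1)
sigma2, T = 0.09, 1.0
Delta = 1.0 / 390
N = int(T / Delta)
reps = 20_000

Y = rng.normal(0.0, np.sqrt(sigma2 * Delta), size=(reps, N))
sigma2_hat = (Y**2).sum(axis=1) / T

print(sigma2_hat.mean())   # should be close to sigma2 = 0.09
print(sigma2_hat.var())    # should be close to 2 * sigma2**2 * Delta / T
```

Shrinking Delta (with T fixed) visibly shrinks the spread of sigma2_hat, which is the "sample as often as possible" conclusion in miniature.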
Q9. What is the second order correction term in the asymptotic variance?
A(2) is the base correction term, present even with Gaussian noise, in Theorem 2; Cum₄(U)B(2) is the further correction due to the sampling randomness.
Q10. What is the caveat in the case of a serially correlated U?
The same caveat as in the serially correlated U case applies: having modified the matrix γ²V, the artificial “normal” distribution would no longer use the correct second-moment structure of the data.
Q11. What is the usual estimate of asymptotic variance?
Then

V̂ = AVAR_normal = (−(1/T) l̈(σ̂², â²))^(−1)

is the usual estimate of the asymptotic variance when the distribution is correctly specified as Gaussian.
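The same observed-information construction can be checked numerically on the simpler no-noise model of (2.1), where it reduces analytically to 2σ̂⁴Δ, consistent with the AVAR 2σ⁴Δ in (2.4). The Python sketch below computes l̈ by central finite differences; all numerical values (and the one-parameter setting itself) are illustrative rather than the paper's two-parameter case.

```python
import numpy as np

# Hedged illustration of V_hat = (-(1/T) * l''(sigma2_hat))^(-1), applied
# to the no-noise likelihood (2.1), where it reduces analytically to
# 2 * sigma_hat^4 * Delta, i.e. the AVAR in (2.4).  The second derivative
# is computed by a central finite difference; parameter values are illustrative.
rng = np.random.default_rng(2)
sigma2 = 0.09
Delta = 1.0 / 390
N = 390
T = N * Delta

Y = rng.normal(0.0, np.sqrt(sigma2 * Delta), size=N)

def l(s2):
    # Gaussian log-likelihood (2.1) of the no-noise model
    return -N * np.log(2 * np.pi * s2 * Delta) / 2 - (Y @ Y) / (2 * s2 * Delta)

s2_hat = (Y @ Y) / T        # MLE (2.2)
h = 1e-5
ldd = (l(s2_hat + h) - 2 * l(s2_hat) + l(s2_hat - h)) / h**2   # numerical l''

V_hat = 1.0 / (-ldd / T)
print(V_hat, 2 * s2_hat**2 * Delta)   # the two values should agree closely
```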
Q12. What is the asymptotic variance of the estimators?
The asymptotic variance of the estimators takes the sandwich form

D_{σ²,a²}^(−1) S_{σ²,a²} D_{σ²,a²}^(−1).   (8.3)

The asymptotic variance of (σ̂², â²) is thus the same as if μ were known; in other words, as if μ = 0, which is the case that the authors focused on in all the previous sections.